00:00:00.000 Started by upstream project "autotest-per-patch" build number 132845
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.062 The recommended git tool is: git
00:00:00.062 using credential 00000000-0000-0000-0000-000000000002
00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.099 Fetching changes from the remote Git repository
00:00:00.101 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.167 Using shallow fetch with depth 1
00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.167 > git --version # timeout=10
00:00:00.205 > git --version # 'git version 2.39.2'
00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.960 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.972 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.987 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.987 > git config core.sparsecheckout # timeout=10
00:00:07.001 > git read-tree -mu HEAD # timeout=10
00:00:07.017 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.041 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.041 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.143 [Pipeline] Start of Pipeline
00:00:07.153 [Pipeline] library
00:00:07.154 Loading library shm_lib@master
00:00:07.154 Library shm_lib@master is cached. Copying from home.
00:00:07.171 [Pipeline] node
00:00:07.183 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.185 [Pipeline] {
00:00:07.193 [Pipeline] catchError
00:00:07.194 [Pipeline] {
00:00:07.205 [Pipeline] wrap
00:00:07.212 [Pipeline] {
00:00:07.220 [Pipeline] stage
00:00:07.221 [Pipeline] { (Prologue)
00:00:07.429 [Pipeline] sh
00:00:07.720 + logger -p user.info -t JENKINS-CI
00:00:07.748 [Pipeline] echo
00:00:07.749 Node: GP11
00:00:07.754 [Pipeline] sh
00:00:08.051 [Pipeline] setCustomBuildProperty
00:00:08.060 [Pipeline] echo
00:00:08.061 Cleanup processes
00:00:08.065 [Pipeline] sh
00:00:08.350 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.350 468299 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.364 [Pipeline] sh
00:00:08.748 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.748 ++ grep -v 'sudo pgrep'
00:00:08.748 ++ awk '{print $1}'
00:00:08.748 + sudo kill -9
00:00:08.748 + true
00:00:08.762 [Pipeline] cleanWs
00:00:08.770 [WS-CLEANUP] Deleting project workspace...
00:00:08.770 [WS-CLEANUP] Deferred wipeout is used...
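
The "Cleanup processes" step above is a small reusable idiom: list any stale processes still referencing the workspace, kill them, and tolerate the common case where nothing matches. A minimal standalone sketch of that idiom in bash; the workspace path is the one this job uses, and the pids variable is introduced here only for illustration:

  #!/usr/bin/env bash
  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # List processes whose command line mentions the workspace SPDK tree,
  # drop the pgrep invocation itself, and keep only the PID column.
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # When nothing matches, kill -9 gets no arguments and exits non-zero;
  # '|| true' keeps the step green, which is what the '+ true' trace above records.
  sudo kill -9 $pids || true
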
00:00:08.777 [WS-CLEANUP] done
00:00:08.781 [Pipeline] setCustomBuildProperty
00:00:08.794 [Pipeline] sh
00:00:09.078 + sudo git config --global --replace-all safe.directory '*'
00:00:09.166 [Pipeline] httpRequest
00:00:09.673 [Pipeline] echo
00:00:09.675 Sorcerer 10.211.164.20 is alive
00:00:09.684 [Pipeline] retry
00:00:09.686 [Pipeline] {
00:00:09.699 [Pipeline] httpRequest
00:00:09.704 HttpMethod: GET
00:00:09.704 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.705 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.709 Response Code: HTTP/1.1 200 OK
00:00:09.709 Success: Status code 200 is in the accepted range: 200,404
00:00:09.710 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.667 [Pipeline] }
00:00:10.685 [Pipeline] // retry
00:00:10.692 [Pipeline] sh
00:00:10.982 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.999 [Pipeline] httpRequest
00:00:11.531 [Pipeline] echo
00:00:11.532 Sorcerer 10.211.164.20 is alive
00:00:11.542 [Pipeline] retry
00:00:11.544 [Pipeline] {
00:00:11.558 [Pipeline] httpRequest
00:00:11.563 HttpMethod: GET
00:00:11.563 URL: http://10.211.164.20/packages/spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:00:11.564 Sending request to url: http://10.211.164.20/packages/spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:00:11.583 Response Code: HTTP/1.1 200 OK
00:00:11.583 Success: Status code 200 is in the accepted range: 200,404
00:00:11.584 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:01:15.469 [Pipeline] }
00:01:15.488 [Pipeline] // retry
00:01:15.496 [Pipeline] sh
00:01:15.788 + tar --no-same-owner -xf spdk_3aefe42284bc282ed1b542a9c85f65f7f06a8820.tar.gz
00:01:18.335 [Pipeline] sh
00:01:18.623 + git -C spdk log --oneline -n5
00:01:18.623 3aefe4228 mk/spdk.common.mk Use pattern substitution instead of prefix removal
00:01:18.623 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:18.623 66289a6db build: use VERSION file for storing version
00:01:18.623 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:18.623 cec5ba284 nvme/rdma: Register UMR per IO request
00:01:18.635 [Pipeline] }
00:01:18.650 [Pipeline] // stage
00:01:18.661 [Pipeline] stage
00:01:18.663 [Pipeline] { (Prepare)
00:01:18.682 [Pipeline] writeFile
00:01:18.699 [Pipeline] sh
00:01:18.986 + logger -p user.info -t JENKINS-CI
00:01:19.001 [Pipeline] sh
00:01:19.290 + logger -p user.info -t JENKINS-CI
00:01:19.303 [Pipeline] sh
00:01:19.590 + cat autorun-spdk.conf
00:01:19.590 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.590 SPDK_TEST_NVMF=1
00:01:19.590 SPDK_TEST_NVME_CLI=1
00:01:19.590 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.590 SPDK_TEST_NVMF_NICS=e810
00:01:19.590 SPDK_TEST_VFIOUSER=1
00:01:19.590 SPDK_RUN_UBSAN=1
00:01:19.590 NET_TYPE=phy
00:01:19.598 RUN_NIGHTLY=0
00:01:19.603 [Pipeline] readFile
00:01:19.631 [Pipeline] withEnv
00:01:19.633 [Pipeline] {
00:01:19.648 [Pipeline] sh
00:01:19.939 + set -ex
00:01:19.939 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:19.939 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:19.939 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.939 ++ SPDK_TEST_NVMF=1
00:01:19.939 ++ SPDK_TEST_NVME_CLI=1
00:01:19.939 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.939 ++ SPDK_TEST_NVMF_NICS=e810
00:01:19.939 ++ SPDK_TEST_VFIOUSER=1
00:01:19.939 ++ SPDK_RUN_UBSAN=1
00:01:19.939 ++ NET_TYPE=phy
00:01:19.939 ++ RUN_NIGHTLY=0
00:01:19.939 + case $SPDK_TEST_NVMF_NICS in
00:01:19.939 + DRIVERS=ice
00:01:19.939 + [[ tcp == \r\d\m\a ]]
00:01:19.939 + [[ -n ice ]]
00:01:19.939 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:19.939 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:19.939 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:19.939 rmmod: ERROR: Module irdma is not currently loaded
00:01:19.939 rmmod: ERROR: Module i40iw is not currently loaded
00:01:19.939 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:19.939 + true
00:01:19.940 + for D in $DRIVERS
00:01:19.940 + sudo modprobe ice
00:01:19.940 + exit 0
00:01:19.950 [Pipeline] }
00:01:19.965 [Pipeline] // withEnv
00:01:19.971 [Pipeline] }
00:01:19.985 [Pipeline] // stage
00:01:19.995 [Pipeline] catchError
00:01:19.996 [Pipeline] {
00:01:20.010 [Pipeline] timeout
00:01:20.010 Timeout set to expire in 1 hr 0 min
00:01:20.012 [Pipeline] {
00:01:20.026 [Pipeline] stage
00:01:20.028 [Pipeline] { (Tests)
00:01:20.043 [Pipeline] sh
00:01:20.331 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.331 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.332 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.332 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:20.332 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.332 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.332 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:20.332 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.332 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:20.332 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:20.332 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:20.332 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:20.332 + source /etc/os-release
00:01:20.332 ++ NAME='Fedora Linux'
00:01:20.332 ++ VERSION='39 (Cloud Edition)'
00:01:20.332 ++ ID=fedora
00:01:20.332 ++ VERSION_ID=39
00:01:20.332 ++ VERSION_CODENAME=
00:01:20.332 ++ PLATFORM_ID=platform:f39
00:01:20.332 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:20.332 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:20.332 ++ LOGO=fedora-logo-icon
00:01:20.332 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:20.332 ++ HOME_URL=https://fedoraproject.org/
00:01:20.332 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:20.332 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:20.332 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:20.332 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:20.332 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:20.332 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:20.332 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:20.332 ++ SUPPORT_END=2024-11-12
00:01:20.332 ++ VARIANT='Cloud Edition'
00:01:20.332 ++ VARIANT_ID=cloud
00:01:20.332 + uname -a
00:01:20.332 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:20.332 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:21.270 Hugepages
00:01:21.270 node hugesize free / total
00:01:21.270 node0 1048576kB 0 / 0
00:01:21.270 node0 2048kB 0 / 0
00:01:21.270 node1 1048576kB 0 / 0
00:01:21.270 node1 2048kB 0 / 0
00:01:21.270
00:01:21.270 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:21.270 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:21.530 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:21.530 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:21.530 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:21.530 + rm -f /tmp/spdk-ld-path
00:01:21.530 + source autorun-spdk.conf
00:01:21.530 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:21.530 ++ SPDK_TEST_NVMF=1
00:01:21.530 ++ SPDK_TEST_NVME_CLI=1
00:01:21.530 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:21.530 ++ SPDK_TEST_NVMF_NICS=e810
00:01:21.530 ++ SPDK_TEST_VFIOUSER=1
00:01:21.530 ++ SPDK_RUN_UBSAN=1
00:01:21.530 ++ NET_TYPE=phy
00:01:21.530 ++ RUN_NIGHTLY=0
00:01:21.530 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:21.530 + [[ -n '' ]]
00:01:21.530 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:21.530 + for M in /var/spdk/build-*-manifest.txt
00:01:21.530 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:21.530 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:21.530 + for M in /var/spdk/build-*-manifest.txt
00:01:21.530 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:21.530 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:21.530 + for M in /var/spdk/build-*-manifest.txt
00:01:21.530 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:21.530 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:21.530 ++ uname
00:01:21.530 + [[ Linux == \L\i\n\u\x ]]
00:01:21.530 + sudo dmesg -T
00:01:21.530 + sudo dmesg --clear
00:01:21.530 + dmesg_pid=468977
00:01:21.530 + [[ Fedora Linux == FreeBSD ]]
00:01:21.530 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.530 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:21.530 + sudo dmesg -Tw
00:01:21.530 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:21.530 + [[ -x /usr/src/fio-static/fio ]]
00:01:21.530 + export FIO_BIN=/usr/src/fio-static/fio
00:01:21.530 + FIO_BIN=/usr/src/fio-static/fio
00:01:21.530 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:21.530 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:21.530 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:21.530 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.530 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:21.530 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:21.530 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.530 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:21.530 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
14:38:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
14:38:04 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
14:38:04 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
14:38:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
14:38:04 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:21.789 14:38:04 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
14:38:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
14:38:04 -- scripts/common.sh@15 -- $ shopt -s extglob
14:38:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
14:38:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
14:38:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
14:38:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:38:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:38:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:38:04 -- paths/export.sh@5 -- $ export PATH
14:38:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
14:38:04 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
14:38:04 -- common/autobuild_common.sh@493 -- $ date +%s
14:38:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733924284.XXXXXX
14:38:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733924284.aKJBXR
14:38:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
14:38:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
14:38:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
14:38:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
14:38:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
14:38:04 -- common/autobuild_common.sh@509 -- $ get_config_params
14:38:04 -- common/autotest_common.sh@409 -- $ xtrace_disable
14:38:04 -- common/autotest_common.sh@10 -- $ set +x
14:38:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
14:38:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
14:38:04 -- pm/common@17 -- $ local monitor
14:38:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:38:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:38:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:38:04 -- pm/common@21 -- $ date +%s
14:38:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
14:38:04 -- pm/common@21 -- $ date +%s
14:38:04 -- pm/common@25 -- $ sleep 1
14:38:04 -- pm/common@21 -- $ date +%s
14:38:04 -- pm/common@21 -- $ date +%s
14:38:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733924284
14:38:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733924284
14:38:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733924284
14:38:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733924284
00:01:21.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733924284_collect-cpu-load.pm.log
00:01:21.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733924284_collect-vmstat.pm.log
00:01:21.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733924284_collect-cpu-temp.pm.log
00:01:21.790 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733924284_collect-bmc-pm.bmc.pm.log
00:01:22.729 14:38:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
14:38:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
14:38:05 -- spdk/autobuild.sh@12 -- $ umask 022
14:38:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
14:38:05 -- spdk/autobuild.sh@16 -- $ date -u
00:01:22.729 Wed Dec 11 01:38:05 PM UTC 2024
14:38:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:22.729 v25.01-rc1-1-g3aefe4228
14:38:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
14:38:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
14:38:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
14:38:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
14:38:05 -- common/autotest_common.sh@1111 -- $ xtrace_disable
14:38:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.729 ************************************
00:01:22.729 START TEST ubsan
00:01:22.729 ************************************
14:38:05 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:22.729 using ubsan
00:01:22.729
00:01:22.729 real 0m0.000s
00:01:22.729 user 0m0.000s
00:01:22.729 sys 0m0.000s
14:38:05 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
14:38:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:22.729 ************************************
00:01:22.729 END TEST ubsan
00:01:22.729 ************************************
14:38:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
14:38:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
14:38:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
14:38:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
14:38:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
14:38:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
14:38:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
14:38:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:22.729 14:38:05 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:22.729 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:22.729 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:22.989 Using 'verbs' RDMA provider
00:01:33.919 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:43.896 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:43.896 Creating mk/config.mk...done.
00:01:43.896 Creating mk/cc.flags.mk...done.
00:01:43.896 Type 'make' to build.
14:38:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
14:38:26 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
14:38:26 -- common/autotest_common.sh@1111 -- $ xtrace_disable
14:38:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:43.896 ************************************
00:01:43.896 START TEST make
00:01:43.896 ************************************
14:38:26 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:45.813 The Meson build system
00:01:45.813 Version: 1.5.0
00:01:45.813 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:45.813 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:45.813 Build type: native build
00:01:45.813 Project name: libvfio-user
00:01:45.813 Project version: 0.0.1
00:01:45.813 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:45.813 C linker for the host machine: cc ld.bfd 2.40-14
00:01:45.813 Host machine cpu family: x86_64
00:01:45.813 Host machine cpu: x86_64
00:01:45.813 Run-time dependency threads found: YES
00:01:45.813 Library dl found: YES
00:01:45.813 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:45.813 Run-time dependency json-c found: YES 0.17
00:01:45.813 Run-time dependency cmocka found: YES 1.1.7
00:01:45.813 Program pytest-3 found: NO
00:01:45.813 Program flake8 found: NO
00:01:45.813 Program misspell-fixer found: NO
00:01:45.813 Program restructuredtext-lint found: NO
00:01:45.813 Program valgrind found: YES (/usr/bin/valgrind)
00:01:45.813 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:45.813 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:45.813 Compiler for C supports arguments -Wwrite-strings: YES
00:01:45.813 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.813 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:45.813 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:45.813 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:45.813 Build targets in project: 8
00:01:45.813 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:45.813 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:45.813
00:01:45.813 libvfio-user 0.0.1
00:01:45.813
00:01:45.813 User defined options
00:01:45.813 buildtype : debug
00:01:45.813 default_library: shared
00:01:45.813 libdir : /usr/local/lib
00:01:45.813
00:01:45.813 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:46.759 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:47.023 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:47.023 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:47.023 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:47.023 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:47.023 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:47.023 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:47.023 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:47.023 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:47.023 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:47.023 [10/37] Compiling C object samples/null.p/null.c.o
00:01:47.023 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:47.023 [12/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:47.023 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:47.023 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:47.023 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:47.023 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:47.023 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:47.023 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:47.023 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:47.023 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:47.023 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:47.023 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:47.023 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:47.023 [24/37] Compiling C object samples/server.p/server.c.o
00:01:47.023 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:47.023 [26/37] Compiling C object samples/client.p/client.c.o
00:01:47.288 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:47.288 [28/37] Linking target samples/client
00:01:47.288 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:01:47.288 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:47.288 [31/37] Linking target test/unit_tests
00:01:47.558 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:47.558 [33/37] Linking target samples/null
00:01:47.558 [34/37] Linking target samples/server
00:01:47.558 [35/37] Linking target samples/gpio-pci-idio-16
00:01:47.558 [36/37] Linking target samples/lspci
00:01:47.558 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:47.558 INFO: autodetecting backend as ninja
00:01:47.558 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
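
The libvfio-user build above is meson's standard out-of-tree flow: configure a dedicated build directory, compile it with ninja, then stage the result with a DESTDIR install instead of writing to the real libdir. A minimal sketch of that sequence in bash; SRC, BUILD and STAGE are hypothetical placeholder paths, and the setup options mirror the "User defined options" summary above (buildtype debug, shared default_library):

  SRC=./libvfio-user        # hypothetical source checkout
  BUILD=$SRC/build-debug    # out-of-tree build directory
  STAGE=$PWD/stage          # staging root for the install
  meson setup "$BUILD" "$SRC" --buildtype=debug --default-library=shared
  ninja -C "$BUILD"
  # DESTDIR prefixes every install path, so the library lands under
  # $STAGE/usr/local/lib rather than the system libdir -- the same idiom
  # as the 'DESTDIR=... meson install --quiet -C ...' line that follows.
  DESTDIR=$STAGE meson install --quiet -C "$BUILD"
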
00:01:47.558 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:48.499 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:48.499 ninja: no work to do.
00:01:53.768 The Meson build system
00:01:53.768 Version: 1.5.0
00:01:53.768 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:53.768 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:53.768 Build type: native build
00:01:53.768 Program cat found: YES (/usr/bin/cat)
00:01:53.768 Project name: DPDK
00:01:53.768 Project version: 24.03.0
00:01:53.768 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:53.768 C linker for the host machine: cc ld.bfd 2.40-14
00:01:53.768 Host machine cpu family: x86_64
00:01:53.768 Host machine cpu: x86_64
00:01:53.768 Message: ## Building in Developer Mode ##
00:01:53.768 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:53.768 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:53.768 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:53.768 Program python3 found: YES (/usr/bin/python3)
00:01:53.768 Program cat found: YES (/usr/bin/cat)
00:01:53.768 Compiler for C supports arguments -march=native: YES
00:01:53.768 Checking for size of "void *" : 8
00:01:53.768 Checking for size of "void *" : 8 (cached)
00:01:53.768 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:53.768 Library m found: YES
00:01:53.768 Library numa found: YES
00:01:53.768 Has header "numaif.h" : YES
00:01:53.768 Library fdt found: NO
00:01:53.768 Library execinfo found: NO
00:01:53.768 Has header "execinfo.h" : YES
00:01:53.768 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:53.768 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:53.768 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:53.768 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:53.768 Run-time dependency openssl found: YES 3.1.1
00:01:53.768 Run-time dependency libpcap found: YES 1.10.4
00:01:53.768 Has header "pcap.h" with dependency libpcap: YES
00:01:53.768 Compiler for C supports arguments -Wcast-qual: YES
00:01:53.768 Compiler for C supports arguments -Wdeprecated: YES
00:01:53.768 Compiler for C supports arguments -Wformat: YES
00:01:53.768 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:53.768 Compiler for C supports arguments -Wformat-security: NO
00:01:53.768 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.768 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:53.768 Compiler for C supports arguments -Wnested-externs: YES
00:01:53.768 Compiler for C supports arguments -Wold-style-definition: YES
00:01:53.768 Compiler for C supports arguments -Wpointer-arith: YES
00:01:53.768 Compiler for C supports arguments -Wsign-compare: YES
00:01:53.768 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:53.768 Compiler for C supports arguments -Wundef: YES
00:01:53.768 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.768 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:53.768 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:53.768 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.768 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:53.768 Program objdump found: YES (/usr/bin/objdump)
00:01:53.768 Compiler for C supports arguments -mavx512f: YES
00:01:53.768 Checking if "AVX512 checking" compiles: YES
00:01:53.768 Fetching value of define "__SSE4_2__" : 1
00:01:53.769 Fetching value of define "__AES__" : 1
00:01:53.769 Fetching value of define "__AVX__" : 1
00:01:53.769 Fetching value of define "__AVX2__" : (undefined)
00:01:53.769 Fetching value of define "__AVX512BW__" : (undefined)
00:01:53.769 Fetching value of define "__AVX512CD__" : (undefined)
00:01:53.769 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:53.769 Fetching value of define "__AVX512F__" : (undefined)
00:01:53.769 Fetching value of define "__AVX512VL__" : (undefined)
00:01:53.769 Fetching value of define "__PCLMUL__" : 1
00:01:53.769 Fetching value of define "__RDRND__" : 1
00:01:53.769 Fetching value of define "__RDSEED__" : (undefined)
00:01:53.769 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:53.769 Fetching value of define "__znver1__" : (undefined)
00:01:53.769 Fetching value of define "__znver2__" : (undefined)
00:01:53.769 Fetching value of define "__znver3__" : (undefined)
00:01:53.769 Fetching value of define "__znver4__" : (undefined)
00:01:53.769 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:53.769 Message: lib/log: Defining dependency "log"
00:01:53.769 Message: lib/kvargs: Defining dependency "kvargs"
00:01:53.769 Message: lib/telemetry: Defining dependency "telemetry"
00:01:53.769 Checking for function "getentropy" : NO
00:01:53.769 Message: lib/eal: Defining dependency "eal"
00:01:53.769 Message: lib/ring: Defining dependency "ring"
00:01:53.769 Message: lib/rcu: Defining dependency "rcu"
00:01:53.769 Message: lib/mempool: Defining dependency "mempool"
00:01:53.769 Message: lib/mbuf: Defining dependency "mbuf"
00:01:53.769 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:53.769 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:53.769 Compiler for C supports arguments -mpclmul: YES
00:01:53.769 Compiler for C supports arguments -maes: YES
00:01:53.769 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:53.769 Compiler for C supports arguments -mavx512bw: YES
00:01:53.769 Compiler for C supports arguments -mavx512dq: YES
00:01:53.769 Compiler for C supports arguments -mavx512vl: YES
00:01:53.769 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:53.769 Compiler for C supports arguments -mavx2: YES
00:01:53.769 Compiler for C supports arguments -mavx: YES
00:01:53.769 Message: lib/net: Defining dependency "net"
00:01:53.769 Message: lib/meter: Defining dependency "meter"
00:01:53.769 Message: lib/ethdev: Defining dependency "ethdev"
00:01:53.769 Message: lib/pci: Defining dependency "pci"
00:01:53.769 Message: lib/cmdline: Defining dependency "cmdline"
00:01:53.769 Message: lib/hash: Defining dependency "hash"
00:01:53.769 Message: lib/timer: Defining dependency "timer"
00:01:53.769 Message: lib/compressdev: Defining dependency "compressdev"
00:01:53.769 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:53.769 Message: lib/dmadev: Defining dependency "dmadev"
00:01:53.769 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:53.769 Message: lib/power: Defining dependency "power"
00:01:53.769 Message: lib/reorder: Defining dependency "reorder"
00:01:53.769 Message: lib/security: Defining dependency "security"
00:01:53.769 Has header "linux/userfaultfd.h" : YES
00:01:53.769 Has header "linux/vduse.h" : YES
00:01:53.769 Message: lib/vhost: Defining dependency "vhost"
00:01:53.769 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:53.769 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:53.769 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:53.769 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:53.769 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:53.769 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:53.769 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:53.769 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:53.769 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:53.769 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:53.769 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:53.769 Configuring doxy-api-html.conf using configuration
00:01:53.769 Configuring doxy-api-man.conf using configuration
00:01:53.769 Program mandb found: YES (/usr/bin/mandb)
00:01:53.769 Program sphinx-build found: NO
00:01:53.769 Configuring rte_build_config.h using configuration
00:01:53.769 Message:
00:01:53.769 =================
00:01:53.769 Applications Enabled
00:01:53.769 =================
00:01:53.769
00:01:53.769 apps:
00:01:53.769
00:01:53.769
00:01:53.769 Message:
00:01:53.769 =================
00:01:53.769 Libraries Enabled
00:01:53.769 =================
00:01:53.769
00:01:53.769 libs:
00:01:53.769 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:53.769 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:53.769 cryptodev, dmadev, power, reorder, security, vhost,
00:01:53.769
00:01:53.769 Message:
00:01:53.769 ===============
00:01:53.769 Drivers Enabled
00:01:53.769 ===============
00:01:53.769
00:01:53.769 common:
00:01:53.769
00:01:53.769 bus:
00:01:53.769 pci, vdev,
00:01:53.769 mempool:
00:01:53.769 ring,
00:01:53.769 dma:
00:01:53.769
00:01:53.769 net:
00:01:53.769
00:01:53.769 crypto:
00:01:53.769
00:01:53.769 compress:
00:01:53.769
00:01:53.769 vdpa:
00:01:53.769
00:01:53.769
00:01:53.769 Message:
00:01:53.769 =================
00:01:53.769 Content Skipped
00:01:53.769 =================
00:01:53.769
00:01:53.769 apps:
00:01:53.769 dumpcap: explicitly disabled via build config
00:01:53.769 graph: explicitly disabled via build config
00:01:53.769 pdump: explicitly disabled via build config
00:01:53.769 proc-info: explicitly disabled via build config
00:01:53.769 test-acl: explicitly disabled via build config
00:01:53.769 test-bbdev: explicitly disabled via build config
00:01:53.769 test-cmdline: explicitly disabled via build config
00:01:53.769 test-compress-perf: explicitly disabled via build config
00:01:53.769 test-crypto-perf: explicitly disabled via build config
00:01:53.769 test-dma-perf: explicitly disabled via build config
00:01:53.769 test-eventdev: explicitly disabled via build config
00:01:53.769 test-fib: explicitly disabled via build config
00:01:53.769 test-flow-perf: explicitly disabled via build config
00:01:53.769 test-gpudev: explicitly disabled via build config
00:01:53.769 test-mldev: explicitly disabled via build config
00:01:53.769 test-pipeline: explicitly disabled via build config
00:01:53.769 test-pmd: explicitly disabled via build config
00:01:53.769 test-regex: explicitly disabled via build config
00:01:53.769 test-sad: explicitly disabled via build config
00:01:53.769 test-security-perf: explicitly disabled via build config
00:01:53.769
00:01:53.769 libs:
00:01:53.769 argparse: explicitly disabled via build config
00:01:53.769 metrics: explicitly disabled via build config
00:01:53.769 acl: explicitly disabled via build config
00:01:53.769 bbdev: explicitly disabled via build config
00:01:53.769 bitratestats: explicitly disabled via build config
00:01:53.769 bpf: explicitly disabled via build config
00:01:53.769 cfgfile: explicitly disabled via build config
00:01:53.769 distributor: explicitly disabled via build config
00:01:53.769 efd: explicitly disabled via build config
00:01:53.769 eventdev: explicitly disabled via build config
00:01:53.769 dispatcher: explicitly disabled via build config
00:01:53.769 gpudev: explicitly disabled via build config
00:01:53.769 gro: explicitly disabled via build config
00:01:53.769 gso: explicitly disabled via build config
00:01:53.769 ip_frag: explicitly disabled via build config
00:01:53.769 jobstats: explicitly disabled via build config
00:01:53.769 latencystats: explicitly disabled via build config
00:01:53.769 lpm: explicitly disabled via build config
00:01:53.769 member: explicitly disabled via build config
00:01:53.769 pcapng: explicitly disabled via build config
00:01:53.769 rawdev: explicitly disabled via build config
00:01:53.769 regexdev: explicitly disabled via build config
00:01:53.769 mldev: explicitly disabled via build config
00:01:53.769 rib: explicitly disabled via build config
00:01:53.769 sched: explicitly disabled via build config
00:01:53.769 stack: explicitly disabled via build config
00:01:53.769 ipsec: explicitly disabled via build config
00:01:53.769 pdcp: explicitly disabled via build config
00:01:53.769 fib: explicitly disabled via build config
00:01:53.769 port: explicitly disabled via build config
00:01:53.769 pdump: explicitly disabled via build config
00:01:53.769 table: explicitly disabled via build config
00:01:53.769 pipeline: explicitly disabled via build config
00:01:53.769 graph: explicitly disabled via build config
00:01:53.769 node: explicitly disabled via build config
00:01:53.769
00:01:53.769 drivers:
00:01:53.769 common/cpt: not in enabled drivers build config
00:01:53.769 common/dpaax: not in enabled drivers build config
00:01:53.769 common/iavf: not in enabled drivers build config
00:01:53.769 common/idpf: not in enabled drivers build config
00:01:53.769 common/ionic: not in enabled drivers build config
00:01:53.769 common/mvep: not in enabled drivers build config
00:01:53.769 common/octeontx: not in enabled drivers build config
00:01:53.769 bus/auxiliary: not in enabled drivers build config
00:01:53.769 bus/cdx: not in enabled drivers build config
00:01:53.769 bus/dpaa: not in enabled drivers build config
00:01:53.769 bus/fslmc: not in enabled drivers build config
00:01:53.769 bus/ifpga: not in enabled drivers build config
00:01:53.769 bus/platform: not in enabled drivers build config
00:01:53.769 bus/uacce: not in enabled drivers build config
00:01:53.769 bus/vmbus: not in enabled drivers build config
00:01:53.769 common/cnxk: not in enabled drivers build config
00:01:53.769 common/mlx5: not in enabled drivers build config
00:01:53.769 common/nfp: not in enabled drivers build config
00:01:53.769 common/nitrox: not in enabled drivers build config
00:01:53.769 common/qat: not in enabled drivers build config
00:01:53.769 common/sfc_efx: not in enabled drivers build config
00:01:53.769 mempool/bucket: not in enabled drivers build config
00:01:53.769 mempool/cnxk: not in enabled drivers build config
00:01:53.769 mempool/dpaa: not in enabled drivers build config
00:01:53.769 mempool/dpaa2: not in enabled drivers build config
00:01:53.769 mempool/octeontx: not in enabled drivers build config
00:01:53.769 mempool/stack: not in enabled drivers build config
00:01:53.769 dma/cnxk: not in enabled drivers build config
00:01:53.769 dma/dpaa: not in enabled drivers build config
00:01:53.769 dma/dpaa2: not in enabled drivers build config
00:01:53.769 dma/hisilicon: not in enabled drivers build config
00:01:53.769 dma/idxd: not in enabled drivers build config
00:01:53.769 dma/ioat: not in enabled drivers build config
00:01:53.770 dma/skeleton: not in enabled drivers build config
00:01:53.770 net/af_packet: not in enabled drivers build config
00:01:53.770 net/af_xdp: not in enabled drivers build config
00:01:53.770 net/ark: not in enabled drivers build config
00:01:53.770 net/atlantic: not in enabled drivers build config
00:01:53.770 net/avp: not in enabled drivers build config
00:01:53.770 net/axgbe: not in enabled drivers build config
00:01:53.770 net/bnx2x: not in enabled drivers build config
00:01:53.770 net/bnxt: not in enabled drivers build config
00:01:53.770 net/bonding: not in enabled drivers build config
00:01:53.770 net/cnxk: not in enabled drivers build config
00:01:53.770 net/cpfl: not in enabled drivers build config
00:01:53.770 net/cxgbe: not in enabled drivers build config
00:01:53.770 net/dpaa: not in enabled drivers build config
00:01:53.770 net/dpaa2: not in enabled drivers build config
00:01:53.770 net/e1000: not in enabled drivers build config
00:01:53.770 net/ena: not in enabled drivers build config
00:01:53.770 net/enetc: not in enabled drivers build config
00:01:53.770 net/enetfec: not in enabled drivers build config
00:01:53.770 net/enic: not in enabled drivers build config
00:01:53.770 net/failsafe: not in enabled drivers build config
00:01:53.770 net/fm10k: not in enabled drivers build config
00:01:53.770 net/gve: not in enabled drivers build config
00:01:53.770 net/hinic: not in enabled drivers build config
00:01:53.770 net/hns3: not in enabled drivers build config
00:01:53.770 net/i40e: not in enabled drivers build config
00:01:53.770 net/iavf: not in enabled drivers build config
00:01:53.770 net/ice: not in enabled drivers build config
00:01:53.770 net/idpf: not in enabled drivers build config
00:01:53.770 net/igc: not in enabled drivers build config
00:01:53.770 net/ionic: not in enabled drivers build config
00:01:53.770 net/ipn3ke: not in enabled drivers build config
00:01:53.770 net/ixgbe: not in enabled drivers build config
00:01:53.770 net/mana: not in enabled drivers build config
00:01:53.770 net/memif: not in enabled drivers build config
00:01:53.770 net/mlx4: not in enabled drivers build config
00:01:53.770 net/mlx5: not in enabled drivers build config
00:01:53.770 net/mvneta: not in enabled drivers build config
00:01:53.770 net/mvpp2: not in enabled drivers build config
00:01:53.770 net/netvsc: not in enabled drivers build config
00:01:53.770 net/nfb: not in enabled drivers build config
00:01:53.770 net/nfp: not in enabled drivers build config
00:01:53.770 net/ngbe: not in enabled drivers build config
00:01:53.770 net/null: not in enabled drivers build config
00:01:53.770 net/octeontx: not in enabled drivers build config
00:01:53.770 net/octeon_ep: not in enabled drivers build config
00:01:53.770 net/pcap: not in enabled drivers build config
00:01:53.770 net/pfe: not in enabled drivers build config
00:01:53.770 net/qede: not in enabled drivers build config
00:01:53.770 net/ring: not in enabled drivers build config
00:01:53.770 net/sfc: not in enabled drivers build config
00:01:53.770 net/softnic: not in enabled drivers build config
00:01:53.770 net/tap: not in enabled drivers build config
00:01:53.770 net/thunderx: not in enabled drivers build config
00:01:53.770 net/txgbe: not in enabled drivers build config
00:01:53.770 net/vdev_netvsc: not in enabled drivers build config
00:01:53.770 net/vhost: not in enabled drivers build config
00:01:53.770 net/virtio: not in enabled drivers build config
00:01:53.770 net/vmxnet3: not in enabled drivers build config
00:01:53.770 raw/*: missing internal dependency, "rawdev"
00:01:53.770 crypto/armv8: not in enabled drivers build config
00:01:53.770 crypto/bcmfs: not in enabled drivers build config
00:01:53.770 crypto/caam_jr: not in enabled drivers build config
00:01:53.770 crypto/ccp: not in enabled drivers build config
00:01:53.770 crypto/cnxk: not in enabled drivers build config
00:01:53.770 crypto/dpaa_sec: not in enabled drivers build config
00:01:53.770 crypto/dpaa2_sec: not in enabled drivers build config
00:01:53.770 crypto/ipsec_mb: not in enabled drivers build config
00:01:53.770 crypto/mlx5: not in enabled drivers build config
00:01:53.770 crypto/mvsam: not in enabled drivers build config
00:01:53.770 crypto/nitrox: not in enabled drivers build config
00:01:53.770 crypto/null: not in enabled drivers build config
00:01:53.770 crypto/octeontx: not in enabled drivers build config
00:01:53.770 crypto/openssl: not in enabled drivers build config
00:01:53.770 crypto/scheduler: not in enabled drivers build config
00:01:53.770 crypto/uadk: not in enabled drivers build config
00:01:53.770 crypto/virtio: not in enabled drivers build config
00:01:53.770 compress/isal: not in enabled drivers build config
00:01:53.770 compress/mlx5: not in enabled drivers build config
00:01:53.770 compress/nitrox: not in enabled drivers build config
00:01:53.770 compress/octeontx: not in enabled drivers build config
00:01:53.770 compress/zlib: not in enabled drivers build config
00:01:53.770 regex/*: missing internal dependency, "regexdev"
00:01:53.770 ml/*: missing internal dependency, "mldev"
00:01:53.770 vdpa/ifc: not in enabled drivers build config
00:01:53.770 vdpa/mlx5: not in enabled drivers build config
00:01:53.770 vdpa/nfp: not in enabled drivers build config
00:01:53.770 vdpa/sfc: not in enabled drivers build config
00:01:53.770 event/*: missing internal dependency, "eventdev"
00:01:53.770 baseband/*: missing internal dependency, "bbdev"
00:01:53.770 gpu/*: missing internal dependency, "gpudev"
00:01:53.770
00:01:53.770
00:01:53.770 Build targets in project: 85
00:01:53.770
00:01:53.770 DPDK 24.03.0
00:01:53.770
00:01:53.770 User defined options
00:01:53.770 buildtype : debug
00:01:53.770 default_library : shared
00:01:53.770 libdir : lib
00:01:53.770 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:53.770 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:53.770 c_link_args :
00:01:53.770 cpu_instruction_set: native
00:01:53.770 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:53.770 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:53.770 enable_docs : false
00:01:53.770 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:53.770 enable_kmods : false
00:01:53.770 max_lcores : 128
00:01:53.770 tests : false
00:01:53.770
00:01:53.770 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:54.032 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:54.032 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:54.032 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:54.032 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:54.032 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:54.032 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:54.032 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:54.032 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:54.032 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:54.032 [9/268] Linking static target lib/librte_kvargs.a
00:01:54.032 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:54.032 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:54.032 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:54.032 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:54.032 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:54.032 [15/268] Linking static target lib/librte_log.a
00:01:54.032 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:54.603 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:54.866 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:54.866 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:54.866 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:54.866 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:54.866 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:54.866 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:54.866 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:54.866 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:54.866 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:54.866 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:54.866 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:54.866 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:54.866 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:54.866 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:54.866 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:54.866 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:54.866 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:54.866 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:54.866 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:54.866 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:54.866 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:54.866 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:54.866 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:54.866 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:54.866 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:54.866 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:54.866 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:54.866 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:54.866 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:54.866 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:54.866 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:54.866 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:54.866 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:54.866 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:54.866 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:55.133 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:55.133 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:55.133 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:55.133 [56/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:55.133 [57/268] Linking static target lib/librte_telemetry.a
00:01:55.133 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:55.133 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:55.133 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:55.133 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:55.133 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:55.133 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:55.133 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:55.391 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:55.391 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:55.391 [67/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:55.391 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:55.391 [69/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:55.391 [70/268] Linking static target lib/librte_pci.a 00:01:55.391 [71/268] Linking target lib/librte_log.so.24.1 00:01:55.654 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.654 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.654 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.654 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.654 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.654 [77/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:55.654 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.654 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.654 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.917 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.917 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.917 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.917 [84/268] Linking target lib/librte_kvargs.so.24.1 00:01:55.917 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.917 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.917 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.917 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.917 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.917 [90/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.917 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.917 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.917 [93/268] Linking static target lib/librte_ring.a 00:01:55.917 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.917 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.917 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.917 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.917 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.917 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.917 [100/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.917 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.917 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.918 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.918 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.918 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.918 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.918 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.918 [108/268] Linking static target lib/librte_meter.a 00:01:55.918 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.179 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.179 [111/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:56.179 [112/268] Linking static target lib/librte_eal.a 00:01:56.179 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.179 [114/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:56.179 [115/268] Linking static target lib/librte_mempool.a 00:01:56.179 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:56.179 [117/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.179 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.179 [119/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:56.179 [120/268] Linking static target lib/librte_rcu.a 00:01:56.179 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.179 [122/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:56.179 [123/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.179 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.179 [125/268] Linking target lib/librte_telemetry.so.24.1 00:01:56.179 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.441 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.441 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.441 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.441 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.441 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.441 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:56.441 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.441 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.441 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.703 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.703 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.703 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.703 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.703 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.703 [141/268] Linking static target lib/librte_net.a 00:01:56.703 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:56.703 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.703 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.703 [145/268] Linking static target lib/librte_cmdline.a 00:01:56.703 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:56.964 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:56.964 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.964 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.964 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.964 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 
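The "Generating lib/<name>.sym_chk" and "Generating symbol file ... .symbols" steps scattered through this build are meson's shared-library symbol tracking: after each librte_*.so.24.1 is linked, meson records its exported symbols so that dependent targets are only relinked when the export list actually changes. A rough shell equivalent of what one such step captures (an illustration only, not meson's real helper; the format of genuine .symbols files differs):

    # list the defined dynamic symbols of one freshly linked DPDK library
    nm --dynamic --defined-only lib/librte_log.so.24.1 \
        | awk '{print $3}' | sort > librte_log.so.24.1.symbols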
00:01:56.964 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.964 [153/268] Linking static target lib/librte_timer.a 00:01:56.964 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:56.964 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:56.964 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.964 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:56.964 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.223 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.223 [160/268] Linking static target lib/librte_dmadev.a 00:01:57.223 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.223 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.223 [163/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.223 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.223 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:57.223 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.223 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.223 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.223 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.482 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.482 [171/268] Linking static target lib/librte_power.a 00:01:57.482 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.482 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.482 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.482 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.482 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.482 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.482 [178/268] Linking static target lib/librte_compressdev.a 00:01:57.482 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.482 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.482 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.482 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.482 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.482 [184/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.482 [185/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:57.482 [186/268] Linking static target lib/librte_hash.a 00:01:57.482 [187/268] Linking static target lib/librte_mbuf.a 00:01:57.741 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.741 [189/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.741 [190/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.741 [191/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:01:57.741 [192/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.741 [193/268] Linking static target lib/librte_reorder.a 00:01:57.741 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.741 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.741 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.741 [197/268] Linking static target lib/librte_security.a 00:01:57.741 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.741 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.741 [200/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.999 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.999 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.999 [203/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.999 [204/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.999 [205/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.999 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.999 [207/268] Linking static target drivers/librte_mempool_ring.a 00:01:57.999 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.999 [209/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.999 [210/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.999 [211/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.999 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.999 [213/268] Linking static target drivers/librte_bus_vdev.a 00:01:58.000 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.000 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.000 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.000 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.000 [218/268] Linking static target drivers/librte_bus_pci.a 00:01:58.000 [219/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.258 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.258 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.258 [222/268] Linking static target lib/librte_cryptodev.a 00:01:58.258 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.258 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.258 [225/268] Linking static target lib/librte_ethdev.a 00:01:58.516 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.451 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.827 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 
00:02:02.729 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.729 [230/268] Linking target lib/librte_eal.so.24.1 00:02:02.729 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.729 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:02.729 [233/268] Linking target lib/librte_meter.so.24.1 00:02:02.729 [234/268] Linking target lib/librte_ring.so.24.1 00:02:02.729 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:02.729 [236/268] Linking target lib/librte_timer.so.24.1 00:02:02.729 [237/268] Linking target lib/librte_pci.so.24.1 00:02:02.729 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:02.729 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:02.729 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:02.729 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:02.729 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:02.729 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:02.729 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:02.729 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:02.729 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:02.987 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:02.987 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:02.987 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:02.987 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:03.245 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:03.245 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:03.245 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:03.245 [254/268] Linking target lib/librte_net.so.24.1 00:02:03.245 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:03.245 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:03.245 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:03.245 [258/268] Linking target lib/librte_security.so.24.1 00:02:03.245 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:03.245 [260/268] Linking target lib/librte_hash.so.24.1 00:02:03.245 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:03.503 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:03.503 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:03.503 [264/268] Linking target lib/librte_power.so.24.1 00:02:06.787 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:06.787 [266/268] Linking static target lib/librte_vhost.a 00:02:07.722 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.722 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:07.722 INFO: autodetecting backend as ninja 00:02:07.722 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:29.642 CC lib/log/log.o 00:02:29.642 CC lib/ut/ut.o 00:02:29.642 CC lib/log/log_flags.o 00:02:29.642 CC lib/log/log_deprecated.o 
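The two INFO lines above mark the hand-off to the DPDK sub-build: meson has already configured dpdk/build-tmp, and ninja is autodetected as the backend. Re-running that step by hand uses the exact command shown in the log (the -j 48 matches this host's parallel job count):

    /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48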
00:02:29.642 CC lib/ut_mock/mock.o 00:02:29.642 LIB libspdk_ut.a 00:02:29.642 LIB libspdk_ut_mock.a 00:02:29.642 LIB libspdk_log.a 00:02:29.642 SO libspdk_ut.so.2.0 00:02:29.643 SO libspdk_ut_mock.so.6.0 00:02:29.643 SO libspdk_log.so.7.1 00:02:29.643 SYMLINK libspdk_ut.so 00:02:29.643 SYMLINK libspdk_ut_mock.so 00:02:29.643 SYMLINK libspdk_log.so 00:02:29.643 CC lib/dma/dma.o 00:02:29.643 CC lib/util/base64.o 00:02:29.643 CC lib/util/bit_array.o 00:02:29.643 CC lib/util/cpuset.o 00:02:29.643 CXX lib/trace_parser/trace.o 00:02:29.643 CC lib/ioat/ioat.o 00:02:29.643 CC lib/util/crc16.o 00:02:29.643 CC lib/util/crc32.o 00:02:29.643 CC lib/util/crc32c.o 00:02:29.643 CC lib/util/crc32_ieee.o 00:02:29.643 CC lib/util/crc64.o 00:02:29.643 CC lib/util/dif.o 00:02:29.643 CC lib/util/fd.o 00:02:29.643 CC lib/util/fd_group.o 00:02:29.643 CC lib/util/file.o 00:02:29.643 CC lib/util/hexlify.o 00:02:29.643 CC lib/util/iov.o 00:02:29.643 CC lib/util/math.o 00:02:29.643 CC lib/util/net.o 00:02:29.643 CC lib/util/pipe.o 00:02:29.643 CC lib/util/strerror_tls.o 00:02:29.643 CC lib/util/string.o 00:02:29.643 CC lib/util/uuid.o 00:02:29.643 CC lib/util/xor.o 00:02:29.643 CC lib/util/zipf.o 00:02:29.643 CC lib/util/md5.o 00:02:29.643 CC lib/vfio_user/host/vfio_user_pci.o 00:02:29.643 CC lib/vfio_user/host/vfio_user.o 00:02:29.643 LIB libspdk_dma.a 00:02:29.643 SO libspdk_dma.so.5.0 00:02:29.643 SYMLINK libspdk_dma.so 00:02:29.643 LIB libspdk_ioat.a 00:02:29.643 SO libspdk_ioat.so.7.0 00:02:29.643 SYMLINK libspdk_ioat.so 00:02:29.643 LIB libspdk_vfio_user.a 00:02:29.643 SO libspdk_vfio_user.so.5.0 00:02:29.643 SYMLINK libspdk_vfio_user.so 00:02:29.643 LIB libspdk_util.a 00:02:29.643 SO libspdk_util.so.10.1 00:02:29.946 SYMLINK libspdk_util.so 00:02:29.946 CC lib/json/json_parse.o 00:02:29.946 CC lib/rdma_utils/rdma_utils.o 00:02:29.946 CC lib/idxd/idxd.o 00:02:29.946 CC lib/env_dpdk/env.o 00:02:29.946 CC lib/conf/conf.o 00:02:29.946 CC lib/json/json_util.o 00:02:29.946 CC lib/vmd/vmd.o 00:02:29.946 CC lib/env_dpdk/memory.o 00:02:29.946 CC lib/idxd/idxd_user.o 00:02:29.946 CC lib/json/json_write.o 00:02:29.946 CC lib/vmd/led.o 00:02:29.946 CC lib/env_dpdk/pci.o 00:02:29.946 CC lib/idxd/idxd_kernel.o 00:02:29.946 CC lib/env_dpdk/init.o 00:02:29.946 CC lib/env_dpdk/threads.o 00:02:29.946 CC lib/env_dpdk/pci_ioat.o 00:02:29.946 CC lib/env_dpdk/pci_virtio.o 00:02:29.946 CC lib/env_dpdk/pci_vmd.o 00:02:29.946 CC lib/env_dpdk/pci_idxd.o 00:02:29.946 CC lib/env_dpdk/pci_event.o 00:02:29.946 CC lib/env_dpdk/sigbus_handler.o 00:02:29.946 CC lib/env_dpdk/pci_dpdk.o 00:02:29.946 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:29.946 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:29.946 LIB libspdk_trace_parser.a 00:02:30.218 SO libspdk_trace_parser.so.6.0 00:02:30.218 SYMLINK libspdk_trace_parser.so 00:02:30.218 LIB libspdk_conf.a 00:02:30.218 SO libspdk_conf.so.6.0 00:02:30.476 LIB libspdk_rdma_utils.a 00:02:30.476 LIB libspdk_json.a 00:02:30.476 SO libspdk_rdma_utils.so.1.0 00:02:30.476 SYMLINK libspdk_conf.so 00:02:30.476 SO libspdk_json.so.6.0 00:02:30.476 SYMLINK libspdk_rdma_utils.so 00:02:30.476 SYMLINK libspdk_json.so 00:02:30.476 CC lib/rdma_provider/common.o 00:02:30.476 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.476 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.476 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.476 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.476 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.736 LIB libspdk_idxd.a 00:02:30.736 SO libspdk_idxd.so.12.1 00:02:30.736 LIB libspdk_vmd.a 00:02:30.736 SYMLINK libspdk_idxd.so 
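Each LIB / SO / SYMLINK triple in this part of the log is SPDK's make producing a static archive, a versioned shared object (libspdk_ut.so.2.0, libspdk_log.so.7.1, and so on), and an unversioned symlink pointing at it. A quick way to confirm the resulting layout after a build, assuming the default build/lib output directory:

    ls -l build/lib/libspdk_log.so*
    # expected shape (illustrative):
    # libspdk_log.so -> libspdk_log.so.7.1
    # libspdk_log.so.7.1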
00:02:30.736 SO libspdk_vmd.so.6.0 00:02:30.736 SYMLINK libspdk_vmd.so 00:02:30.736 LIB libspdk_rdma_provider.a 00:02:30.736 SO libspdk_rdma_provider.so.7.0 00:02:30.994 LIB libspdk_jsonrpc.a 00:02:30.994 SO libspdk_jsonrpc.so.6.0 00:02:30.994 SYMLINK libspdk_rdma_provider.so 00:02:30.994 SYMLINK libspdk_jsonrpc.so 00:02:31.252 CC lib/rpc/rpc.o 00:02:31.252 LIB libspdk_rpc.a 00:02:31.510 SO libspdk_rpc.so.6.0 00:02:31.510 SYMLINK libspdk_rpc.so 00:02:31.510 CC lib/keyring/keyring.o 00:02:31.510 CC lib/notify/notify.o 00:02:31.510 CC lib/trace/trace.o 00:02:31.510 CC lib/keyring/keyring_rpc.o 00:02:31.510 CC lib/trace/trace_flags.o 00:02:31.510 CC lib/notify/notify_rpc.o 00:02:31.510 CC lib/trace/trace_rpc.o 00:02:31.768 LIB libspdk_notify.a 00:02:31.768 SO libspdk_notify.so.6.0 00:02:31.768 SYMLINK libspdk_notify.so 00:02:31.768 LIB libspdk_keyring.a 00:02:32.026 LIB libspdk_trace.a 00:02:32.026 SO libspdk_keyring.so.2.0 00:02:32.026 SO libspdk_trace.so.11.0 00:02:32.026 SYMLINK libspdk_keyring.so 00:02:32.026 SYMLINK libspdk_trace.so 00:02:32.026 LIB libspdk_env_dpdk.a 00:02:32.026 SO libspdk_env_dpdk.so.15.1 00:02:32.284 CC lib/thread/thread.o 00:02:32.284 CC lib/thread/iobuf.o 00:02:32.284 CC lib/sock/sock.o 00:02:32.284 CC lib/sock/sock_rpc.o 00:02:32.284 SYMLINK libspdk_env_dpdk.so 00:02:32.543 LIB libspdk_sock.a 00:02:32.543 SO libspdk_sock.so.10.0 00:02:32.543 SYMLINK libspdk_sock.so 00:02:32.801 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.801 CC lib/nvme/nvme_ctrlr.o 00:02:32.801 CC lib/nvme/nvme_fabric.o 00:02:32.801 CC lib/nvme/nvme_ns_cmd.o 00:02:32.801 CC lib/nvme/nvme_ns.o 00:02:32.801 CC lib/nvme/nvme_pcie_common.o 00:02:32.801 CC lib/nvme/nvme_pcie.o 00:02:32.801 CC lib/nvme/nvme_qpair.o 00:02:32.801 CC lib/nvme/nvme.o 00:02:32.801 CC lib/nvme/nvme_quirks.o 00:02:32.801 CC lib/nvme/nvme_transport.o 00:02:32.801 CC lib/nvme/nvme_discovery.o 00:02:32.801 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.801 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.801 CC lib/nvme/nvme_tcp.o 00:02:32.801 CC lib/nvme/nvme_opal.o 00:02:32.801 CC lib/nvme/nvme_io_msg.o 00:02:32.801 CC lib/nvme/nvme_poll_group.o 00:02:32.801 CC lib/nvme/nvme_zns.o 00:02:32.801 CC lib/nvme/nvme_stubs.o 00:02:32.801 CC lib/nvme/nvme_auth.o 00:02:32.801 CC lib/nvme/nvme_cuse.o 00:02:32.801 CC lib/nvme/nvme_vfio_user.o 00:02:32.801 CC lib/nvme/nvme_rdma.o 00:02:33.734 LIB libspdk_thread.a 00:02:33.735 SO libspdk_thread.so.11.0 00:02:33.993 SYMLINK libspdk_thread.so 00:02:33.993 CC lib/blob/blobstore.o 00:02:33.993 CC lib/init/json_config.o 00:02:33.993 CC lib/accel/accel.o 00:02:33.993 CC lib/blob/request.o 00:02:33.993 CC lib/virtio/virtio.o 00:02:33.993 CC lib/blob/zeroes.o 00:02:33.993 CC lib/init/subsystem.o 00:02:33.993 CC lib/accel/accel_rpc.o 00:02:33.993 CC lib/virtio/virtio_vhost_user.o 00:02:33.993 CC lib/init/subsystem_rpc.o 00:02:33.993 CC lib/virtio/virtio_vfio_user.o 00:02:33.993 CC lib/accel/accel_sw.o 00:02:33.993 CC lib/blob/blob_bs_dev.o 00:02:33.993 CC lib/vfu_tgt/tgt_endpoint.o 00:02:33.993 CC lib/virtio/virtio_pci.o 00:02:33.993 CC lib/init/rpc.o 00:02:33.993 CC lib/vfu_tgt/tgt_rpc.o 00:02:33.993 CC lib/fsdev/fsdev.o 00:02:33.993 CC lib/fsdev/fsdev_io.o 00:02:33.993 CC lib/fsdev/fsdev_rpc.o 00:02:34.250 LIB libspdk_init.a 00:02:34.250 SO libspdk_init.so.6.0 00:02:34.508 SYMLINK libspdk_init.so 00:02:34.508 LIB libspdk_vfu_tgt.a 00:02:34.508 SO libspdk_vfu_tgt.so.3.0 00:02:34.508 LIB libspdk_virtio.a 00:02:34.508 SYMLINK libspdk_vfu_tgt.so 00:02:34.508 SO libspdk_virtio.so.7.0 00:02:34.508 CC lib/event/app.o 
00:02:34.508 SYMLINK libspdk_virtio.so 00:02:34.508 CC lib/event/reactor.o 00:02:34.508 CC lib/event/log_rpc.o 00:02:34.508 CC lib/event/app_rpc.o 00:02:34.508 CC lib/event/scheduler_static.o 00:02:34.765 LIB libspdk_fsdev.a 00:02:34.765 SO libspdk_fsdev.so.2.0 00:02:34.765 SYMLINK libspdk_fsdev.so 00:02:35.023 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:35.023 LIB libspdk_event.a 00:02:35.023 SO libspdk_event.so.14.0 00:02:35.281 SYMLINK libspdk_event.so 00:02:35.281 LIB libspdk_accel.a 00:02:35.281 SO libspdk_accel.so.16.0 00:02:35.281 SYMLINK libspdk_accel.so 00:02:35.281 LIB libspdk_nvme.a 00:02:35.539 SO libspdk_nvme.so.15.0 00:02:35.539 CC lib/bdev/bdev.o 00:02:35.539 CC lib/bdev/bdev_rpc.o 00:02:35.539 CC lib/bdev/bdev_zone.o 00:02:35.539 CC lib/bdev/part.o 00:02:35.539 CC lib/bdev/scsi_nvme.o 00:02:35.797 SYMLINK libspdk_nvme.so 00:02:35.797 LIB libspdk_fuse_dispatcher.a 00:02:35.798 SO libspdk_fuse_dispatcher.so.1.0 00:02:35.798 SYMLINK libspdk_fuse_dispatcher.so 00:02:37.169 LIB libspdk_blob.a 00:02:37.169 SO libspdk_blob.so.12.0 00:02:37.169 SYMLINK libspdk_blob.so 00:02:37.427 CC lib/lvol/lvol.o 00:02:37.427 CC lib/blobfs/blobfs.o 00:02:37.427 CC lib/blobfs/tree.o 00:02:38.368 LIB libspdk_bdev.a 00:02:38.368 SO libspdk_bdev.so.17.0 00:02:38.368 SYMLINK libspdk_bdev.so 00:02:38.368 LIB libspdk_blobfs.a 00:02:38.368 SO libspdk_blobfs.so.11.0 00:02:38.368 SYMLINK libspdk_blobfs.so 00:02:38.368 LIB libspdk_lvol.a 00:02:38.368 SO libspdk_lvol.so.11.0 00:02:38.368 CC lib/nbd/nbd.o 00:02:38.368 CC lib/ublk/ublk.o 00:02:38.368 CC lib/scsi/dev.o 00:02:38.368 CC lib/nbd/nbd_rpc.o 00:02:38.368 CC lib/nvmf/ctrlr.o 00:02:38.368 CC lib/ublk/ublk_rpc.o 00:02:38.368 CC lib/scsi/lun.o 00:02:38.368 CC lib/nvmf/ctrlr_discovery.o 00:02:38.368 CC lib/scsi/port.o 00:02:38.368 CC lib/nvmf/ctrlr_bdev.o 00:02:38.368 CC lib/scsi/scsi.o 00:02:38.368 CC lib/nvmf/subsystem.o 00:02:38.368 CC lib/ftl/ftl_core.o 00:02:38.368 CC lib/scsi/scsi_bdev.o 00:02:38.368 CC lib/nvmf/nvmf.o 00:02:38.368 CC lib/scsi/scsi_pr.o 00:02:38.368 CC lib/ftl/ftl_init.o 00:02:38.368 CC lib/nvmf/nvmf_rpc.o 00:02:38.368 CC lib/scsi/scsi_rpc.o 00:02:38.368 CC lib/ftl/ftl_layout.o 00:02:38.368 CC lib/nvmf/transport.o 00:02:38.368 CC lib/nvmf/tcp.o 00:02:38.368 CC lib/ftl/ftl_debug.o 00:02:38.368 CC lib/scsi/task.o 00:02:38.368 CC lib/ftl/ftl_io.o 00:02:38.368 CC lib/nvmf/stubs.o 00:02:38.368 CC lib/ftl/ftl_sb.o 00:02:38.368 CC lib/nvmf/mdns_server.o 00:02:38.368 CC lib/ftl/ftl_l2p.o 00:02:38.368 CC lib/ftl/ftl_l2p_flat.o 00:02:38.368 CC lib/nvmf/vfio_user.o 00:02:38.368 CC lib/ftl/ftl_nv_cache.o 00:02:38.368 CC lib/nvmf/rdma.o 00:02:38.368 CC lib/nvmf/auth.o 00:02:38.368 CC lib/ftl/ftl_band_ops.o 00:02:38.368 CC lib/ftl/ftl_band.o 00:02:38.368 CC lib/ftl/ftl_writer.o 00:02:38.368 CC lib/ftl/ftl_rq.o 00:02:38.368 CC lib/ftl/ftl_reloc.o 00:02:38.368 CC lib/ftl/ftl_l2p_cache.o 00:02:38.368 CC lib/ftl/ftl_p2l.o 00:02:38.368 CC lib/ftl/ftl_p2l_log.o 00:02:38.368 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.368 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.368 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.368 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.368 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:38.647 SYMLINK libspdk_lvol.so 00:02:38.647 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.908 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.908 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.908 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:38.908 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:38.908 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:38.908 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:38.908 
CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:38.908 CC lib/ftl/utils/ftl_conf.o 00:02:38.908 CC lib/ftl/utils/ftl_md.o 00:02:38.908 CC lib/ftl/utils/ftl_mempool.o 00:02:38.908 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.908 CC lib/ftl/utils/ftl_property.o 00:02:38.908 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.908 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.908 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.908 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.908 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.908 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:39.168 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:39.168 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:39.168 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:39.168 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:39.168 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:39.168 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:39.168 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:39.168 CC lib/ftl/base/ftl_base_dev.o 00:02:39.168 CC lib/ftl/base/ftl_base_bdev.o 00:02:39.168 CC lib/ftl/ftl_trace.o 00:02:39.168 LIB libspdk_nbd.a 00:02:39.426 SO libspdk_nbd.so.7.0 00:02:39.426 SYMLINK libspdk_nbd.so 00:02:39.426 LIB libspdk_scsi.a 00:02:39.426 SO libspdk_scsi.so.9.0 00:02:39.684 SYMLINK libspdk_scsi.so 00:02:39.684 LIB libspdk_ublk.a 00:02:39.684 SO libspdk_ublk.so.3.0 00:02:39.684 SYMLINK libspdk_ublk.so 00:02:39.684 CC lib/iscsi/conn.o 00:02:39.684 CC lib/vhost/vhost.o 00:02:39.684 CC lib/iscsi/init_grp.o 00:02:39.684 CC lib/vhost/vhost_rpc.o 00:02:39.684 CC lib/iscsi/iscsi.o 00:02:39.684 CC lib/vhost/vhost_scsi.o 00:02:39.684 CC lib/iscsi/param.o 00:02:39.684 CC lib/iscsi/portal_grp.o 00:02:39.684 CC lib/vhost/vhost_blk.o 00:02:39.684 CC lib/iscsi/tgt_node.o 00:02:39.684 CC lib/vhost/rte_vhost_user.o 00:02:39.684 CC lib/iscsi/iscsi_subsystem.o 00:02:39.684 CC lib/iscsi/iscsi_rpc.o 00:02:39.684 CC lib/iscsi/task.o 00:02:39.942 LIB libspdk_ftl.a 00:02:40.199 SO libspdk_ftl.so.9.0 00:02:40.457 SYMLINK libspdk_ftl.so 00:02:41.023 LIB libspdk_vhost.a 00:02:41.023 SO libspdk_vhost.so.8.0 00:02:41.023 SYMLINK libspdk_vhost.so 00:02:41.280 LIB libspdk_nvmf.a 00:02:41.280 LIB libspdk_iscsi.a 00:02:41.280 SO libspdk_nvmf.so.20.0 00:02:41.280 SO libspdk_iscsi.so.8.0 00:02:41.280 SYMLINK libspdk_iscsi.so 00:02:41.539 SYMLINK libspdk_nvmf.so 00:02:41.796 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.796 CC module/vfu_device/vfu_virtio.o 00:02:41.796 CC module/vfu_device/vfu_virtio_blk.o 00:02:41.796 CC module/vfu_device/vfu_virtio_scsi.o 00:02:41.796 CC module/vfu_device/vfu_virtio_rpc.o 00:02:41.796 CC module/vfu_device/vfu_virtio_fs.o 00:02:41.796 CC module/accel/dsa/accel_dsa.o 00:02:41.796 CC module/sock/posix/posix.o 00:02:41.796 CC module/accel/ioat/accel_ioat.o 00:02:41.796 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.796 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.796 CC module/blob/bdev/blob_bdev.o 00:02:41.796 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.796 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.796 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.796 CC module/fsdev/aio/fsdev_aio.o 00:02:41.796 CC module/accel/error/accel_error.o 00:02:41.796 CC module/keyring/file/keyring.o 00:02:41.796 CC module/keyring/linux/keyring.o 00:02:41.796 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.796 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:41.796 CC module/accel/iaa/accel_iaa.o 00:02:41.796 CC module/keyring/linux/keyring_rpc.o 00:02:41.796 CC module/accel/error/accel_error_rpc.o 00:02:41.796 CC module/keyring/file/keyring_rpc.o 00:02:41.796 CC 
module/fsdev/aio/linux_aio_mgr.o 00:02:41.796 LIB libspdk_env_dpdk_rpc.a 00:02:41.796 SO libspdk_env_dpdk_rpc.so.6.0 00:02:42.054 SYMLINK libspdk_env_dpdk_rpc.so 00:02:42.054 LIB libspdk_keyring_linux.a 00:02:42.054 LIB libspdk_scheduler_gscheduler.a 00:02:42.054 LIB libspdk_scheduler_dpdk_governor.a 00:02:42.054 SO libspdk_keyring_linux.so.1.0 00:02:42.054 SO libspdk_scheduler_gscheduler.so.4.0 00:02:42.054 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:42.054 LIB libspdk_accel_ioat.a 00:02:42.054 LIB libspdk_scheduler_dynamic.a 00:02:42.054 LIB libspdk_accel_error.a 00:02:42.054 SO libspdk_accel_ioat.so.6.0 00:02:42.054 SYMLINK libspdk_scheduler_gscheduler.so 00:02:42.054 SYMLINK libspdk_keyring_linux.so 00:02:42.054 SO libspdk_scheduler_dynamic.so.4.0 00:02:42.054 SO libspdk_accel_error.so.2.0 00:02:42.054 LIB libspdk_keyring_file.a 00:02:42.054 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:42.054 SO libspdk_keyring_file.so.2.0 00:02:42.054 SYMLINK libspdk_scheduler_dynamic.so 00:02:42.054 SYMLINK libspdk_accel_ioat.so 00:02:42.054 SYMLINK libspdk_accel_error.so 00:02:42.054 LIB libspdk_accel_dsa.a 00:02:42.054 LIB libspdk_accel_iaa.a 00:02:42.054 SYMLINK libspdk_keyring_file.so 00:02:42.312 SO libspdk_accel_dsa.so.5.0 00:02:42.312 SO libspdk_accel_iaa.so.3.0 00:02:42.312 LIB libspdk_blob_bdev.a 00:02:42.312 SO libspdk_blob_bdev.so.12.0 00:02:42.312 SYMLINK libspdk_accel_iaa.so 00:02:42.312 SYMLINK libspdk_accel_dsa.so 00:02:42.312 SYMLINK libspdk_blob_bdev.so 00:02:42.312 LIB libspdk_vfu_device.a 00:02:42.570 SO libspdk_vfu_device.so.3.0 00:02:42.570 SYMLINK libspdk_vfu_device.so 00:02:42.570 CC module/bdev/error/vbdev_error.o 00:02:42.570 CC module/bdev/gpt/gpt.o 00:02:42.570 CC module/bdev/delay/vbdev_delay.o 00:02:42.570 CC module/bdev/raid/bdev_raid.o 00:02:42.570 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.570 CC module/bdev/split/vbdev_split.o 00:02:42.570 CC module/bdev/raid/bdev_raid_rpc.o 00:02:42.570 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.570 CC module/bdev/aio/bdev_aio.o 00:02:42.570 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.570 CC module/bdev/raid/bdev_raid_sb.o 00:02:42.570 CC module/bdev/ftl/bdev_ftl.o 00:02:42.570 CC module/bdev/aio/bdev_aio_rpc.o 00:02:42.570 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.570 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.570 CC module/bdev/raid/raid0.o 00:02:42.570 CC module/bdev/lvol/vbdev_lvol.o 00:02:42.570 CC module/bdev/raid/raid1.o 00:02:42.570 CC module/blobfs/bdev/blobfs_bdev.o 00:02:42.570 CC module/bdev/raid/concat.o 00:02:42.570 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.570 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.570 CC module/bdev/nvme/bdev_nvme.o 00:02:42.570 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.570 CC module/bdev/null/bdev_null.o 00:02:42.570 CC module/bdev/malloc/bdev_malloc.o 00:02:42.570 CC module/bdev/null/bdev_null_rpc.o 00:02:42.570 CC module/bdev/nvme/nvme_rpc.o 00:02:42.570 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.570 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:42.570 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.570 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.570 CC module/bdev/nvme/vbdev_opal.o 00:02:42.570 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.570 CC module/bdev/passthru/vbdev_passthru.o 00:02:42.570 CC module/bdev/iscsi/bdev_iscsi.o 00:02:42.570 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.570 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.570 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.570 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.570 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.570 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.570 LIB libspdk_fsdev_aio.a 00:02:42.570 SO libspdk_fsdev_aio.so.1.0 00:02:42.828 LIB libspdk_sock_posix.a 00:02:42.828 SYMLINK libspdk_fsdev_aio.so 00:02:42.828 SO libspdk_sock_posix.so.6.0 00:02:42.828 SYMLINK libspdk_sock_posix.so 00:02:43.085 LIB libspdk_bdev_gpt.a 00:02:43.085 LIB libspdk_blobfs_bdev.a 00:02:43.085 SO libspdk_blobfs_bdev.so.6.0 00:02:43.085 SO libspdk_bdev_gpt.so.6.0 00:02:43.085 LIB libspdk_bdev_error.a 00:02:43.085 LIB libspdk_bdev_split.a 00:02:43.085 SO libspdk_bdev_error.so.6.0 00:02:43.085 SYMLINK libspdk_bdev_gpt.so 00:02:43.085 SYMLINK libspdk_blobfs_bdev.so 00:02:43.085 SO libspdk_bdev_split.so.6.0 00:02:43.085 LIB libspdk_bdev_ftl.a 00:02:43.085 LIB libspdk_bdev_null.a 00:02:43.085 SYMLINK libspdk_bdev_error.so 00:02:43.085 LIB libspdk_bdev_passthru.a 00:02:43.085 SO libspdk_bdev_ftl.so.6.0 00:02:43.085 SO libspdk_bdev_null.so.6.0 00:02:43.085 LIB libspdk_bdev_aio.a 00:02:43.085 SYMLINK libspdk_bdev_split.so 00:02:43.085 SO libspdk_bdev_passthru.so.6.0 00:02:43.085 SO libspdk_bdev_aio.so.6.0 00:02:43.085 LIB libspdk_bdev_zone_block.a 00:02:43.085 SYMLINK libspdk_bdev_ftl.so 00:02:43.085 SYMLINK libspdk_bdev_null.so 00:02:43.085 SYMLINK libspdk_bdev_passthru.so 00:02:43.085 SO libspdk_bdev_zone_block.so.6.0 00:02:43.085 LIB libspdk_bdev_malloc.a 00:02:43.085 LIB libspdk_bdev_iscsi.a 00:02:43.085 SYMLINK libspdk_bdev_aio.so 00:02:43.085 LIB libspdk_bdev_delay.a 00:02:43.085 SO libspdk_bdev_malloc.so.6.0 00:02:43.085 SO libspdk_bdev_iscsi.so.6.0 00:02:43.342 SO libspdk_bdev_delay.so.6.0 00:02:43.342 SYMLINK libspdk_bdev_zone_block.so 00:02:43.342 SYMLINK libspdk_bdev_malloc.so 00:02:43.342 SYMLINK libspdk_bdev_iscsi.so 00:02:43.342 SYMLINK libspdk_bdev_delay.so 00:02:43.342 LIB libspdk_bdev_virtio.a 00:02:43.342 LIB libspdk_bdev_lvol.a 00:02:43.342 SO libspdk_bdev_virtio.so.6.0 00:02:43.342 SO libspdk_bdev_lvol.so.6.0 00:02:43.342 SYMLINK libspdk_bdev_virtio.so 00:02:43.342 SYMLINK libspdk_bdev_lvol.so 00:02:43.907 LIB libspdk_bdev_raid.a 00:02:43.907 SO libspdk_bdev_raid.so.6.0 00:02:43.907 SYMLINK libspdk_bdev_raid.so 00:02:45.282 LIB libspdk_bdev_nvme.a 00:02:45.282 SO libspdk_bdev_nvme.so.7.1 00:02:45.540 SYMLINK libspdk_bdev_nvme.so 00:02:45.798 CC module/event/subsystems/keyring/keyring.o 00:02:45.798 CC module/event/subsystems/iobuf/iobuf.o 00:02:45.798 CC module/event/subsystems/vmd/vmd.o 00:02:45.798 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:45.798 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:45.798 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:45.798 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:45.798 CC module/event/subsystems/sock/sock.o 00:02:45.798 CC module/event/subsystems/fsdev/fsdev.o 00:02:45.798 CC module/event/subsystems/scheduler/scheduler.o 00:02:46.057 LIB libspdk_event_keyring.a 00:02:46.057 LIB libspdk_event_vhost_blk.a 00:02:46.057 LIB libspdk_event_vfu_tgt.a 00:02:46.057 LIB libspdk_event_fsdev.a 00:02:46.057 LIB libspdk_event_vmd.a 00:02:46.057 LIB libspdk_event_scheduler.a 00:02:46.057 LIB libspdk_event_sock.a 00:02:46.057 SO libspdk_event_vhost_blk.so.3.0 00:02:46.057 SO libspdk_event_keyring.so.1.0 00:02:46.057 SO libspdk_event_vfu_tgt.so.3.0 00:02:46.057 SO libspdk_event_fsdev.so.1.0 00:02:46.057 LIB libspdk_event_iobuf.a 00:02:46.057 SO libspdk_event_scheduler.so.4.0 00:02:46.057 SO libspdk_event_sock.so.5.0 00:02:46.057 SO libspdk_event_vmd.so.6.0 
00:02:46.057 SO libspdk_event_iobuf.so.3.0 00:02:46.057 SYMLINK libspdk_event_keyring.so 00:02:46.057 SYMLINK libspdk_event_vhost_blk.so 00:02:46.057 SYMLINK libspdk_event_fsdev.so 00:02:46.057 SYMLINK libspdk_event_vfu_tgt.so 00:02:46.057 SYMLINK libspdk_event_sock.so 00:02:46.057 SYMLINK libspdk_event_scheduler.so 00:02:46.057 SYMLINK libspdk_event_vmd.so 00:02:46.057 SYMLINK libspdk_event_iobuf.so 00:02:46.316 CC module/event/subsystems/accel/accel.o 00:02:46.316 LIB libspdk_event_accel.a 00:02:46.573 SO libspdk_event_accel.so.6.0 00:02:46.573 SYMLINK libspdk_event_accel.so 00:02:46.832 CC module/event/subsystems/bdev/bdev.o 00:02:46.832 LIB libspdk_event_bdev.a 00:02:46.832 SO libspdk_event_bdev.so.6.0 00:02:47.091 SYMLINK libspdk_event_bdev.so 00:02:47.091 CC module/event/subsystems/scsi/scsi.o 00:02:47.091 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:47.091 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:47.091 CC module/event/subsystems/nbd/nbd.o 00:02:47.091 CC module/event/subsystems/ublk/ublk.o 00:02:47.349 LIB libspdk_event_ublk.a 00:02:47.349 LIB libspdk_event_nbd.a 00:02:47.349 SO libspdk_event_ublk.so.3.0 00:02:47.349 LIB libspdk_event_scsi.a 00:02:47.349 SO libspdk_event_nbd.so.6.0 00:02:47.349 SO libspdk_event_scsi.so.6.0 00:02:47.349 SYMLINK libspdk_event_ublk.so 00:02:47.349 SYMLINK libspdk_event_nbd.so 00:02:47.349 SYMLINK libspdk_event_scsi.so 00:02:47.349 LIB libspdk_event_nvmf.a 00:02:47.349 SO libspdk_event_nvmf.so.6.0 00:02:47.349 SYMLINK libspdk_event_nvmf.so 00:02:47.607 CC module/event/subsystems/iscsi/iscsi.o 00:02:47.607 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:47.607 LIB libspdk_event_vhost_scsi.a 00:02:47.865 SO libspdk_event_vhost_scsi.so.3.0 00:02:47.865 LIB libspdk_event_iscsi.a 00:02:47.865 SO libspdk_event_iscsi.so.6.0 00:02:47.865 SYMLINK libspdk_event_vhost_scsi.so 00:02:47.865 SYMLINK libspdk_event_iscsi.so 00:02:47.865 SO libspdk.so.6.0 00:02:47.865 SYMLINK libspdk.so 00:02:48.129 CC app/trace_record/trace_record.o 00:02:48.129 CXX app/trace/trace.o 00:02:48.129 CC app/spdk_nvme_discover/discovery_aer.o 00:02:48.129 CC app/spdk_nvme_perf/perf.o 00:02:48.129 CC app/spdk_lspci/spdk_lspci.o 00:02:48.129 CC app/spdk_nvme_identify/identify.o 00:02:48.129 CC app/spdk_top/spdk_top.o 00:02:48.129 CC test/rpc_client/rpc_client_test.o 00:02:48.129 TEST_HEADER include/spdk/accel.h 00:02:48.129 TEST_HEADER include/spdk/accel_module.h 00:02:48.129 TEST_HEADER include/spdk/assert.h 00:02:48.129 TEST_HEADER include/spdk/barrier.h 00:02:48.129 TEST_HEADER include/spdk/base64.h 00:02:48.129 TEST_HEADER include/spdk/bdev.h 00:02:48.129 TEST_HEADER include/spdk/bdev_module.h 00:02:48.129 TEST_HEADER include/spdk/bdev_zone.h 00:02:48.129 TEST_HEADER include/spdk/bit_array.h 00:02:48.129 TEST_HEADER include/spdk/blob_bdev.h 00:02:48.129 TEST_HEADER include/spdk/bit_pool.h 00:02:48.129 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:48.129 TEST_HEADER include/spdk/blobfs.h 00:02:48.129 TEST_HEADER include/spdk/blob.h 00:02:48.129 TEST_HEADER include/spdk/conf.h 00:02:48.129 TEST_HEADER include/spdk/config.h 00:02:48.129 TEST_HEADER include/spdk/cpuset.h 00:02:48.129 TEST_HEADER include/spdk/crc16.h 00:02:48.129 TEST_HEADER include/spdk/crc32.h 00:02:48.129 TEST_HEADER include/spdk/crc64.h 00:02:48.129 TEST_HEADER include/spdk/dif.h 00:02:48.129 TEST_HEADER include/spdk/dma.h 00:02:48.129 TEST_HEADER include/spdk/endian.h 00:02:48.129 TEST_HEADER include/spdk/env.h 00:02:48.129 TEST_HEADER include/spdk/env_dpdk.h 00:02:48.129 TEST_HEADER 
include/spdk/event.h 00:02:48.129 TEST_HEADER include/spdk/fd_group.h 00:02:48.129 TEST_HEADER include/spdk/fd.h 00:02:48.129 TEST_HEADER include/spdk/file.h 00:02:48.129 TEST_HEADER include/spdk/fsdev_module.h 00:02:48.129 TEST_HEADER include/spdk/fsdev.h 00:02:48.129 TEST_HEADER include/spdk/ftl.h 00:02:48.129 TEST_HEADER include/spdk/gpt_spec.h 00:02:48.129 TEST_HEADER include/spdk/hexlify.h 00:02:48.129 TEST_HEADER include/spdk/histogram_data.h 00:02:48.129 TEST_HEADER include/spdk/idxd.h 00:02:48.129 TEST_HEADER include/spdk/init.h 00:02:48.129 TEST_HEADER include/spdk/idxd_spec.h 00:02:48.129 TEST_HEADER include/spdk/ioat.h 00:02:48.129 TEST_HEADER include/spdk/ioat_spec.h 00:02:48.129 TEST_HEADER include/spdk/iscsi_spec.h 00:02:48.129 TEST_HEADER include/spdk/json.h 00:02:48.129 TEST_HEADER include/spdk/jsonrpc.h 00:02:48.129 TEST_HEADER include/spdk/keyring.h 00:02:48.129 TEST_HEADER include/spdk/keyring_module.h 00:02:48.129 TEST_HEADER include/spdk/log.h 00:02:48.129 TEST_HEADER include/spdk/likely.h 00:02:48.129 TEST_HEADER include/spdk/lvol.h 00:02:48.129 TEST_HEADER include/spdk/memory.h 00:02:48.129 TEST_HEADER include/spdk/md5.h 00:02:48.129 TEST_HEADER include/spdk/mmio.h 00:02:48.129 TEST_HEADER include/spdk/nbd.h 00:02:48.129 TEST_HEADER include/spdk/net.h 00:02:48.129 TEST_HEADER include/spdk/notify.h 00:02:48.129 TEST_HEADER include/spdk/nvme.h 00:02:48.129 TEST_HEADER include/spdk/nvme_intel.h 00:02:48.129 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:48.129 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:48.129 TEST_HEADER include/spdk/nvme_spec.h 00:02:48.129 TEST_HEADER include/spdk/nvme_zns.h 00:02:48.129 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:48.129 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:48.129 TEST_HEADER include/spdk/nvmf.h 00:02:48.129 TEST_HEADER include/spdk/nvmf_spec.h 00:02:48.129 TEST_HEADER include/spdk/nvmf_transport.h 00:02:48.129 TEST_HEADER include/spdk/opal.h 00:02:48.129 TEST_HEADER include/spdk/opal_spec.h 00:02:48.129 TEST_HEADER include/spdk/pci_ids.h 00:02:48.129 TEST_HEADER include/spdk/pipe.h 00:02:48.129 TEST_HEADER include/spdk/queue.h 00:02:48.129 TEST_HEADER include/spdk/reduce.h 00:02:48.129 TEST_HEADER include/spdk/rpc.h 00:02:48.129 TEST_HEADER include/spdk/scheduler.h 00:02:48.129 TEST_HEADER include/spdk/scsi.h 00:02:48.129 TEST_HEADER include/spdk/scsi_spec.h 00:02:48.129 TEST_HEADER include/spdk/sock.h 00:02:48.129 TEST_HEADER include/spdk/stdinc.h 00:02:48.129 TEST_HEADER include/spdk/string.h 00:02:48.129 TEST_HEADER include/spdk/thread.h 00:02:48.129 TEST_HEADER include/spdk/trace.h 00:02:48.129 TEST_HEADER include/spdk/trace_parser.h 00:02:48.129 TEST_HEADER include/spdk/tree.h 00:02:48.129 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:48.129 TEST_HEADER include/spdk/util.h 00:02:48.129 TEST_HEADER include/spdk/ublk.h 00:02:48.129 CC app/spdk_dd/spdk_dd.o 00:02:48.129 TEST_HEADER include/spdk/uuid.h 00:02:48.129 TEST_HEADER include/spdk/version.h 00:02:48.129 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:48.129 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:48.129 TEST_HEADER include/spdk/vhost.h 00:02:48.129 CC app/iscsi_tgt/iscsi_tgt.o 00:02:48.129 TEST_HEADER include/spdk/vmd.h 00:02:48.129 TEST_HEADER include/spdk/xor.h 00:02:48.129 TEST_HEADER include/spdk/zipf.h 00:02:48.129 CXX test/cpp_headers/accel.o 00:02:48.129 CXX test/cpp_headers/accel_module.o 00:02:48.129 CXX test/cpp_headers/assert.o 00:02:48.129 CXX test/cpp_headers/barrier.o 00:02:48.129 CXX test/cpp_headers/base64.o 00:02:48.129 CXX 
test/cpp_headers/bdev.o 00:02:48.129 CXX test/cpp_headers/bdev_module.o 00:02:48.129 CXX test/cpp_headers/bdev_zone.o 00:02:48.129 CXX test/cpp_headers/bit_array.o 00:02:48.129 CXX test/cpp_headers/bit_pool.o 00:02:48.129 CXX test/cpp_headers/blob_bdev.o 00:02:48.129 CXX test/cpp_headers/blobfs_bdev.o 00:02:48.129 CXX test/cpp_headers/blobfs.o 00:02:48.129 CXX test/cpp_headers/blob.o 00:02:48.129 CXX test/cpp_headers/conf.o 00:02:48.129 CXX test/cpp_headers/config.o 00:02:48.129 CXX test/cpp_headers/cpuset.o 00:02:48.129 CXX test/cpp_headers/crc16.o 00:02:48.129 CC app/nvmf_tgt/nvmf_main.o 00:02:48.393 CXX test/cpp_headers/crc32.o 00:02:48.393 CC app/spdk_tgt/spdk_tgt.o 00:02:48.393 CC examples/util/zipf/zipf.o 00:02:48.393 CC examples/ioat/perf/perf.o 00:02:48.393 CC test/app/histogram_perf/histogram_perf.o 00:02:48.393 CC examples/ioat/verify/verify.o 00:02:48.393 CC test/app/jsoncat/jsoncat.o 00:02:48.393 CC test/env/vtophys/vtophys.o 00:02:48.393 CC test/env/memory/memory_ut.o 00:02:48.393 CC test/thread/poller_perf/poller_perf.o 00:02:48.393 CC test/app/stub/stub.o 00:02:48.393 CC test/env/pci/pci_ut.o 00:02:48.393 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:48.393 CC app/fio/nvme/fio_plugin.o 00:02:48.393 CC test/app/bdev_svc/bdev_svc.o 00:02:48.393 CC test/dma/test_dma/test_dma.o 00:02:48.393 CC app/fio/bdev/fio_plugin.o 00:02:48.393 LINK spdk_lspci 00:02:48.393 CC test/env/mem_callbacks/mem_callbacks.o 00:02:48.393 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:48.656 LINK rpc_client_test 00:02:48.656 LINK spdk_nvme_discover 00:02:48.656 LINK jsoncat 00:02:48.656 LINK spdk_trace_record 00:02:48.656 LINK histogram_perf 00:02:48.656 LINK interrupt_tgt 00:02:48.656 LINK vtophys 00:02:48.656 CXX test/cpp_headers/crc64.o 00:02:48.656 LINK zipf 00:02:48.656 LINK poller_perf 00:02:48.656 CXX test/cpp_headers/dif.o 00:02:48.656 LINK iscsi_tgt 00:02:48.656 CXX test/cpp_headers/dma.o 00:02:48.656 CXX test/cpp_headers/endian.o 00:02:48.656 CXX test/cpp_headers/env_dpdk.o 00:02:48.656 CXX test/cpp_headers/env.o 00:02:48.656 CXX test/cpp_headers/event.o 00:02:48.656 CXX test/cpp_headers/fd_group.o 00:02:48.656 LINK nvmf_tgt 00:02:48.656 CXX test/cpp_headers/fd.o 00:02:48.656 CXX test/cpp_headers/file.o 00:02:48.656 LINK env_dpdk_post_init 00:02:48.656 LINK stub 00:02:48.656 CXX test/cpp_headers/fsdev.o 00:02:48.656 CXX test/cpp_headers/fsdev_module.o 00:02:48.656 CXX test/cpp_headers/ftl.o 00:02:48.936 LINK bdev_svc 00:02:48.936 CXX test/cpp_headers/gpt_spec.o 00:02:48.936 LINK ioat_perf 00:02:48.936 CXX test/cpp_headers/hexlify.o 00:02:48.936 CXX test/cpp_headers/histogram_data.o 00:02:48.936 CXX test/cpp_headers/idxd.o 00:02:48.936 LINK verify 00:02:48.936 LINK spdk_tgt 00:02:48.936 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:48.936 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:48.936 CXX test/cpp_headers/idxd_spec.o 00:02:48.936 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:48.936 CXX test/cpp_headers/init.o 00:02:48.936 CXX test/cpp_headers/ioat.o 00:02:48.936 CXX test/cpp_headers/ioat_spec.o 00:02:48.936 LINK spdk_dd 00:02:48.936 CXX test/cpp_headers/iscsi_spec.o 00:02:49.206 LINK spdk_trace 00:02:49.206 CXX test/cpp_headers/json.o 00:02:49.206 CXX test/cpp_headers/jsonrpc.o 00:02:49.206 CXX test/cpp_headers/keyring.o 00:02:49.206 CXX test/cpp_headers/keyring_module.o 00:02:49.206 CXX test/cpp_headers/likely.o 00:02:49.206 CXX test/cpp_headers/log.o 00:02:49.206 CXX test/cpp_headers/lvol.o 00:02:49.206 CXX test/cpp_headers/md5.o 00:02:49.207 CXX 
test/cpp_headers/memory.o 00:02:49.207 CXX test/cpp_headers/mmio.o 00:02:49.207 CXX test/cpp_headers/nbd.o 00:02:49.207 CXX test/cpp_headers/net.o 00:02:49.207 CXX test/cpp_headers/notify.o 00:02:49.207 CXX test/cpp_headers/nvme.o 00:02:49.207 CXX test/cpp_headers/nvme_intel.o 00:02:49.207 LINK pci_ut 00:02:49.207 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.207 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.207 CXX test/cpp_headers/nvme_spec.o 00:02:49.207 CXX test/cpp_headers/nvme_zns.o 00:02:49.207 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.207 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.207 CXX test/cpp_headers/nvmf.o 00:02:49.207 CXX test/cpp_headers/nvmf_spec.o 00:02:49.207 CXX test/cpp_headers/nvmf_transport.o 00:02:49.470 CXX test/cpp_headers/opal.o 00:02:49.470 LINK nvme_fuzz 00:02:49.470 CC examples/vmd/lsvmd/lsvmd.o 00:02:49.470 CXX test/cpp_headers/opal_spec.o 00:02:49.470 CC examples/sock/hello_world/hello_sock.o 00:02:49.470 LINK spdk_nvme 00:02:49.470 LINK spdk_bdev 00:02:49.470 CC examples/vmd/led/led.o 00:02:49.470 LINK test_dma 00:02:49.470 CC test/event/reactor/reactor.o 00:02:49.470 CC test/event/event_perf/event_perf.o 00:02:49.470 CC examples/thread/thread/thread_ex.o 00:02:49.470 CC examples/idxd/perf/perf.o 00:02:49.470 CXX test/cpp_headers/pci_ids.o 00:02:49.470 CC test/event/reactor_perf/reactor_perf.o 00:02:49.470 CXX test/cpp_headers/pipe.o 00:02:49.470 CC test/event/app_repeat/app_repeat.o 00:02:49.470 CXX test/cpp_headers/queue.o 00:02:49.470 CXX test/cpp_headers/reduce.o 00:02:49.470 CXX test/cpp_headers/rpc.o 00:02:49.470 CXX test/cpp_headers/scheduler.o 00:02:49.470 CXX test/cpp_headers/scsi.o 00:02:49.470 CXX test/cpp_headers/scsi_spec.o 00:02:49.470 CXX test/cpp_headers/sock.o 00:02:49.470 CXX test/cpp_headers/stdinc.o 00:02:49.731 CC test/event/scheduler/scheduler.o 00:02:49.731 CXX test/cpp_headers/string.o 00:02:49.731 CXX test/cpp_headers/thread.o 00:02:49.731 CXX test/cpp_headers/trace.o 00:02:49.731 CXX test/cpp_headers/trace_parser.o 00:02:49.731 CXX test/cpp_headers/tree.o 00:02:49.731 CXX test/cpp_headers/ublk.o 00:02:49.731 CC app/vhost/vhost.o 00:02:49.731 CXX test/cpp_headers/util.o 00:02:49.731 CXX test/cpp_headers/uuid.o 00:02:49.731 CXX test/cpp_headers/version.o 00:02:49.731 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.731 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.731 LINK lsvmd 00:02:49.731 CXX test/cpp_headers/vhost.o 00:02:49.731 LINK spdk_nvme_perf 00:02:49.731 CXX test/cpp_headers/vmd.o 00:02:49.731 CXX test/cpp_headers/xor.o 00:02:49.731 CXX test/cpp_headers/zipf.o 00:02:49.731 LINK led 00:02:49.731 LINK mem_callbacks 00:02:49.731 LINK reactor 00:02:49.731 LINK spdk_nvme_identify 00:02:49.731 LINK event_perf 00:02:49.731 LINK reactor_perf 00:02:49.731 LINK vhost_fuzz 00:02:49.731 LINK hello_sock 00:02:49.989 LINK spdk_top 00:02:49.989 LINK app_repeat 00:02:49.989 LINK thread 00:02:49.989 LINK idxd_perf 00:02:49.989 CC test/nvme/e2edp/nvme_dp.o 00:02:49.989 LINK vhost 00:02:49.989 CC test/nvme/sgl/sgl.o 00:02:49.989 CC test/nvme/startup/startup.o 00:02:49.989 LINK scheduler 00:02:49.989 CC test/nvme/reset/reset.o 00:02:50.248 CC test/nvme/overhead/overhead.o 00:02:50.248 CC test/nvme/err_injection/err_injection.o 00:02:50.248 CC test/nvme/aer/aer.o 00:02:50.248 CC test/nvme/reserve/reserve.o 00:02:50.248 CC test/nvme/simple_copy/simple_copy.o 00:02:50.248 CC test/nvme/connect_stress/connect_stress.o 00:02:50.248 CC test/nvme/boot_partition/boot_partition.o 00:02:50.248 CC test/nvme/fused_ordering/fused_ordering.o 00:02:50.248 CC 
test/nvme/compliance/nvme_compliance.o 00:02:50.248 CC test/nvme/fdp/fdp.o 00:02:50.248 CC test/nvme/cuse/cuse.o 00:02:50.248 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:50.248 CC test/blobfs/mkfs/mkfs.o 00:02:50.248 CC test/accel/dif/dif.o 00:02:50.248 CC test/lvol/esnap/esnap.o 00:02:50.248 CC examples/nvme/arbitration/arbitration.o 00:02:50.248 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:50.248 CC examples/nvme/reconnect/reconnect.o 00:02:50.248 LINK startup 00:02:50.248 CC examples/nvme/hello_world/hello_world.o 00:02:50.248 CC examples/nvme/abort/abort.o 00:02:50.248 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:50.248 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:50.248 CC examples/nvme/hotplug/hotplug.o 00:02:50.507 LINK boot_partition 00:02:50.507 LINK err_injection 00:02:50.507 LINK connect_stress 00:02:50.507 LINK doorbell_aers 00:02:50.507 LINK fused_ordering 00:02:50.507 LINK reserve 00:02:50.507 LINK simple_copy 00:02:50.507 LINK nvme_dp 00:02:50.507 LINK sgl 00:02:50.507 LINK reset 00:02:50.507 LINK memory_ut 00:02:50.507 LINK overhead 00:02:50.507 CC examples/accel/perf/accel_perf.o 00:02:50.507 LINK mkfs 00:02:50.507 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:50.507 LINK cmb_copy 00:02:50.507 CC examples/blob/cli/blobcli.o 00:02:50.507 CC examples/blob/hello_world/hello_blob.o 00:02:50.765 LINK aer 00:02:50.765 LINK pmr_persistence 00:02:50.765 LINK nvme_compliance 00:02:50.765 LINK fdp 00:02:50.765 LINK hello_world 00:02:50.765 LINK hotplug 00:02:50.765 LINK reconnect 00:02:50.765 LINK abort 00:02:51.022 LINK arbitration 00:02:51.022 LINK hello_blob 00:02:51.022 LINK hello_fsdev 00:02:51.022 LINK nvme_manage 00:02:51.022 LINK dif 00:02:51.022 LINK accel_perf 00:02:51.280 LINK blobcli 00:02:51.538 LINK iscsi_fuzz 00:02:51.538 CC examples/bdev/bdevperf/bdevperf.o 00:02:51.538 CC examples/bdev/hello_world/hello_bdev.o 00:02:51.538 CC test/bdev/bdevio/bdevio.o 00:02:51.795 LINK hello_bdev 00:02:51.795 LINK cuse 00:02:51.795 LINK bdevio 00:02:52.360 LINK bdevperf 00:02:52.617 CC examples/nvmf/nvmf/nvmf.o 00:02:52.874 LINK nvmf 00:02:55.405 LINK esnap 00:02:55.664 00:02:55.664 real 1m12.025s 00:02:55.664 user 11m52.615s 00:02:55.664 sys 2m41.819s 00:02:55.664 14:39:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.664 14:39:38 make -- common/autotest_common.sh@10 -- $ set +x 00:02:55.664 ************************************ 00:02:55.664 END TEST make 00:02:55.664 ************************************ 00:02:55.922 14:39:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:55.922 14:39:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.922 14:39:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.922 14:39:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.922 14:39:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.922 14:39:38 -- pm/common@44 -- $ pid=469019 00:02:55.922 14:39:38 -- pm/common@50 -- $ kill -TERM 469019 00:02:55.922 14:39:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.922 14:39:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.922 14:39:38 -- pm/common@44 -- $ pid=469021 00:02:55.922 14:39:38 -- pm/common@50 -- $ kill -TERM 469021 00:02:55.922 14:39:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.922 14:39:38 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.922 14:39:38 -- pm/common@44 -- $ pid=469022 00:02:55.922 14:39:38 -- pm/common@50 -- $ kill -TERM 469022 00:02:55.922 14:39:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.922 14:39:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.922 14:39:38 -- pm/common@44 -- $ pid=469054 00:02:55.922 14:39:38 -- pm/common@50 -- $ sudo -E kill -TERM 469054 00:02:55.922 14:39:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:55.922 14:39:38 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:55.922 14:39:38 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:55.922 14:39:38 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:55.922 14:39:38 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:55.922 14:39:38 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:55.922 14:39:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:55.922 14:39:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:55.922 14:39:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:55.922 14:39:38 -- scripts/common.sh@336 -- # IFS=.-: 00:02:55.922 14:39:38 -- scripts/common.sh@336 -- # read -ra ver1 00:02:55.923 14:39:38 -- scripts/common.sh@337 -- # IFS=.-: 00:02:55.923 14:39:38 -- scripts/common.sh@337 -- # read -ra ver2 00:02:55.923 14:39:38 -- scripts/common.sh@338 -- # local 'op=<' 00:02:55.923 14:39:38 -- scripts/common.sh@340 -- # ver1_l=2 00:02:55.923 14:39:38 -- scripts/common.sh@341 -- # ver2_l=1 00:02:55.923 14:39:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:55.923 14:39:38 -- scripts/common.sh@344 -- # case "$op" in 00:02:55.923 14:39:38 -- scripts/common.sh@345 -- # : 1 00:02:55.923 14:39:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:55.923 14:39:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:55.923 14:39:38 -- scripts/common.sh@365 -- # decimal 1 00:02:55.923 14:39:38 -- scripts/common.sh@353 -- # local d=1 00:02:55.923 14:39:38 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:55.923 14:39:38 -- scripts/common.sh@355 -- # echo 1 00:02:55.923 14:39:38 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:55.923 14:39:38 -- scripts/common.sh@366 -- # decimal 2 00:02:55.923 14:39:38 -- scripts/common.sh@353 -- # local d=2 00:02:55.923 14:39:38 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:55.923 14:39:38 -- scripts/common.sh@355 -- # echo 2 00:02:55.923 14:39:38 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:55.923 14:39:38 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:55.923 14:39:38 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:55.923 14:39:38 -- scripts/common.sh@368 -- # return 0 00:02:55.923 14:39:38 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:55.923 14:39:38 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:55.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.923 --rc genhtml_branch_coverage=1 00:02:55.923 --rc genhtml_function_coverage=1 00:02:55.923 --rc genhtml_legend=1 00:02:55.923 --rc geninfo_all_blocks=1 00:02:55.923 --rc geninfo_unexecuted_blocks=1 00:02:55.923 00:02:55.923 ' 00:02:55.923 14:39:38 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:55.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.923 --rc genhtml_branch_coverage=1 00:02:55.923 --rc genhtml_function_coverage=1 00:02:55.923 --rc genhtml_legend=1 00:02:55.923 --rc geninfo_all_blocks=1 00:02:55.923 --rc geninfo_unexecuted_blocks=1 00:02:55.923 00:02:55.923 ' 00:02:55.923 14:39:38 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:55.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.923 --rc genhtml_branch_coverage=1 00:02:55.923 --rc genhtml_function_coverage=1 00:02:55.923 --rc genhtml_legend=1 00:02:55.923 --rc geninfo_all_blocks=1 00:02:55.923 --rc geninfo_unexecuted_blocks=1 00:02:55.923 00:02:55.923 ' 00:02:55.923 14:39:38 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:55.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.923 --rc genhtml_branch_coverage=1 00:02:55.923 --rc genhtml_function_coverage=1 00:02:55.923 --rc genhtml_legend=1 00:02:55.923 --rc geninfo_all_blocks=1 00:02:55.923 --rc geninfo_unexecuted_blocks=1 00:02:55.923 00:02:55.923 ' 00:02:55.923 14:39:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.923 14:39:38 -- nvmf/common.sh@7 -- # uname -s 00:02:55.923 14:39:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.923 14:39:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.923 14:39:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.923 14:39:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.923 14:39:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.923 14:39:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.923 14:39:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.923 14:39:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.923 14:39:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.923 14:39:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.923 14:39:38 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:55.923 14:39:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:55.923 14:39:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.923 14:39:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.923 14:39:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:55.923 14:39:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:55.923 14:39:38 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:55.923 14:39:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:55.923 14:39:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.923 14:39:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.923 14:39:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.923 14:39:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.923 14:39:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.923 14:39:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.923 14:39:38 -- paths/export.sh@5 -- # export PATH 00:02:55.923 14:39:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.923 14:39:38 -- nvmf/common.sh@51 -- # : 0 00:02:55.923 14:39:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:55.923 14:39:38 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:55.923 14:39:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:55.923 14:39:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.923 14:39:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.923 14:39:38 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:55.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:55.923 14:39:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:55.923 14:39:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:55.923 14:39:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:55.923 14:39:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.923 14:39:38 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.923 14:39:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.923 14:39:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.923 14:39:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
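The autotest.sh entries just above and below this point (@33, @34, @39, @40) save the host's systemd-coredump handler and point kernel core dumps at SPDK's core-collector.sh for the duration of the run. A minimal bash sketch of that swap; the /proc path, the trap-based restore, and the $rootdir/$output_dir variables are assumptions for illustration, not lines from this log:

    # Remember the current handler (systemd-coredump here) so it can be restored.
    old_core_pattern=$(</proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"
    # Pipe every crash to the collector; %P = PID, %s = signal, %t = time (see core(5)).
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    # Hypothetical restore step; requires root, like the writes above.
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT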
00:02:55.923 14:39:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.923 14:39:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.923 14:39:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.923 14:39:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.923 14:39:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.923 14:39:38 -- spdk/autotest.sh@48 -- # udevadm_pid=530449 00:02:55.923 14:39:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.923 14:39:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:55.923 14:39:38 -- pm/common@17 -- # local monitor 00:02:55.923 14:39:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.923 14:39:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.923 14:39:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.923 14:39:38 -- pm/common@21 -- # date +%s 00:02:55.923 14:39:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.923 14:39:38 -- pm/common@21 -- # date +%s 00:02:55.923 14:39:38 -- pm/common@25 -- # sleep 1 00:02:55.923 14:39:38 -- pm/common@21 -- # date +%s 00:02:55.923 14:39:38 -- pm/common@21 -- # date +%s 00:02:55.923 14:39:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733924378 00:02:55.923 14:39:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733924378 00:02:55.923 14:39:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733924378 00:02:55.923 14:39:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733924378 00:02:55.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733924378_collect-cpu-load.pm.log 00:02:55.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733924378_collect-vmstat.pm.log 00:02:55.924 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733924378_collect-cpu-temp.pm.log 00:02:56.182 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733924378_collect-bmc-pm.bmc.pm.log 00:02:57.116 14:39:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:57.116 14:39:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:57.116 14:39:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:57.116 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:02:57.116 14:39:39 -- spdk/autotest.sh@59 -- # create_test_list 00:02:57.116 14:39:39 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:57.116 14:39:39 -- common/autotest_common.sh@10 -- # set +x 00:02:57.116 14:39:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:57.116 14:39:39 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:57.116 14:39:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:57.116 14:39:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:57.117 14:39:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:57.117 14:39:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:57.117 14:39:39 -- common/autotest_common.sh@1457 -- # uname 00:02:57.117 14:39:39 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:57.117 14:39:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:57.117 14:39:39 -- common/autotest_common.sh@1477 -- # uname 00:02:57.117 14:39:39 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:57.117 14:39:39 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:57.117 14:39:39 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:57.117 lcov: LCOV version 1.15 00:02:57.117 14:39:39 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:29.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:29.175 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:35.727 14:40:17 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:35.727 14:40:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.727 14:40:17 -- common/autotest_common.sh@10 -- # set +x 00:03:35.727 14:40:17 -- spdk/autotest.sh@78 -- # rm -f 00:03:35.727 14:40:17 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.294 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:36.294 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:36.294 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:36.294 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:36.294 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:36.294 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:36.294 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:36.294 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:36.294 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:36.294 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:36.294 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:36.294 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:36.294 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:36.294 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:36.294 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:36.294 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:36.294 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:36.554 14:40:19 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:36.554 14:40:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:36.554 14:40:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:36.554 14:40:19 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:36.554 14:40:19 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:36.554 14:40:19 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:36.554 14:40:19 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:36.554 14:40:19 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:03:36.554 14:40:19 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:36.554 14:40:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:36.554 14:40:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:36.554 14:40:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.554 14:40:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:36.554 14:40:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:36.554 14:40:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.554 14:40:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:36.554 14:40:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:36.554 14:40:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:36.554 14:40:19 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.554 No valid GPT data, bailing 00:03:36.554 14:40:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.554 14:40:19 -- scripts/common.sh@394 -- # pt= 00:03:36.554 14:40:19 -- scripts/common.sh@395 -- # return 1 00:03:36.554 14:40:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.554 1+0 records in 00:03:36.554 1+0 records out 00:03:36.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00204201 s, 514 MB/s 00:03:36.554 14:40:19 -- spdk/autotest.sh@105 -- # sync 00:03:36.554 14:40:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.554 14:40:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.554 14:40:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:39.088 14:40:21 -- spdk/autotest.sh@111 -- # uname -s 00:03:39.088 14:40:21 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:39.088 14:40:21 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:39.088 14:40:21 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:39.655 Hugepages 00:03:39.655 node hugesize free / total 00:03:39.655 node0 1048576kB 0 / 0 00:03:39.655 node0 2048kB 0 / 0 00:03:39.655 node1 1048576kB 0 / 0 00:03:39.655 node1 2048kB 0 / 0 00:03:39.655 00:03:39.655 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.915 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:39.915 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma 
- - 00:03:39.915 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:39.915 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:39.915 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:39.915 14:40:22 -- spdk/autotest.sh@117 -- # uname -s 00:03:39.915 14:40:22 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:39.915 14:40:22 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:39.915 14:40:22 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.295 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:41.295 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:41.295 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:42.288 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.288 14:40:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:43.223 14:40:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:43.223 14:40:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:43.223 14:40:25 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:43.223 14:40:25 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:43.223 14:40:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:43.223 14:40:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:43.223 14:40:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.223 14:40:25 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.223 14:40:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:43.482 14:40:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:43.482 14:40:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:43.482 14:40:26 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.419 Waiting for block devices as requested 00:03:44.419 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:44.677 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:44.677 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:44.936 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:44.936 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:44.936 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:44.936 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:45.195 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:45.195 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:45.195 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:45.455 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 
00:03:45.455 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:45.455 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:45.455 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:45.714 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:45.714 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:45.714 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:45.972 14:40:28 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:45.972 14:40:28 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:45.972 14:40:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:45.972 14:40:28 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:45.972 14:40:28 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:45.972 14:40:28 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:45.972 14:40:28 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:45.972 14:40:28 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:45.972 14:40:28 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:45.972 14:40:28 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:45.972 14:40:28 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:45.972 14:40:28 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:45.972 14:40:28 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:45.972 14:40:28 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:45.972 14:40:28 -- common/autotest_common.sh@1543 -- # continue 00:03:45.972 14:40:28 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:45.972 14:40:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:45.972 14:40:28 -- common/autotest_common.sh@10 -- # set +x 00:03:45.972 14:40:28 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:45.972 14:40:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.972 14:40:28 -- common/autotest_common.sh@10 -- # set +x 00:03:45.972 14:40:28 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.350 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:47.350 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.3 
(8086 0e23): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:47.350 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:48.289 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:48.289 14:40:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:48.289 14:40:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:48.289 14:40:30 -- common/autotest_common.sh@10 -- # set +x 00:03:48.289 14:40:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:48.289 14:40:30 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:48.289 14:40:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:48.289 14:40:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:48.289 14:40:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:48.289 14:40:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:48.289 14:40:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:48.289 14:40:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:48.289 14:40:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:48.289 14:40:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:48.289 14:40:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.289 14:40:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:48.289 14:40:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:48.549 14:40:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:48.549 14:40:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:48.549 14:40:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:48.549 14:40:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:48.549 14:40:31 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:48.549 14:40:31 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:48.549 14:40:31 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:48.549 14:40:31 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:48.549 14:40:31 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:03:48.549 14:40:31 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:03:48.549 14:40:31 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=540966 00:03:48.549 14:40:31 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:48.549 14:40:31 -- common/autotest_common.sh@1585 -- # waitforlisten 540966 00:03:48.549 14:40:31 -- common/autotest_common.sh@835 -- # '[' -z 540966 ']' 00:03:48.549 14:40:31 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:48.549 14:40:31 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:48.549 14:40:31 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:48.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:48.549 14:40:31 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:48.549 14:40:31 -- common/autotest_common.sh@10 -- # set +x 00:03:48.549 [2024-12-11 14:40:31.122471] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
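The get_nvme_bdfs helper traced above builds its controller list by asking gen_nvme.sh for an attach config and pulling each traddr out with jq; get_nvme_bdfs_by_id then keeps only controllers whose PCI device ID matches the wanted part (0x0a54). A condensed sketch of the same pipeline, reusing the exact calls shown in the trace; on this host only 0000:88:00.0 survives the filter:

    # Enumerate NVMe controllers the way the trace does: config JSON -> traddr list.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # Keep only controllers whose PCI device ID matches the wanted part number.
    for bdf in "${bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done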
00:03:48.549 [2024-12-11 14:40:31.122572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid540966 ] 00:03:48.549 [2024-12-11 14:40:31.186272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:48.549 [2024-12-11 14:40:31.240639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.808 14:40:31 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:48.808 14:40:31 -- common/autotest_common.sh@868 -- # return 0 00:03:48.808 14:40:31 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:48.808 14:40:31 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:48.808 14:40:31 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:52.102 nvme0n1 00:03:52.102 14:40:34 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:52.102 [2024-12-11 14:40:34.840287] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:52.102 [2024-12-11 14:40:34.840326] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:52.102 request: 00:03:52.102 { 00:03:52.102 "nvme_ctrlr_name": "nvme0", 00:03:52.102 "password": "test", 00:03:52.102 "method": "bdev_nvme_opal_revert", 00:03:52.102 "req_id": 1 00:03:52.102 } 00:03:52.102 Got JSON-RPC error response 00:03:52.102 response: 00:03:52.102 { 00:03:52.102 "code": -32603, 00:03:52.102 "message": "Internal error" 00:03:52.102 } 00:03:52.103 14:40:34 -- common/autotest_common.sh@1591 -- # true 00:03:52.103 14:40:34 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:52.103 14:40:34 -- common/autotest_common.sh@1595 -- # killprocess 540966 00:03:52.103 14:40:34 -- common/autotest_common.sh@954 -- # '[' -z 540966 ']' 00:03:52.103 14:40:34 -- common/autotest_common.sh@958 -- # kill -0 540966 00:03:52.103 14:40:34 -- common/autotest_common.sh@959 -- # uname 00:03:52.103 14:40:34 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:52.103 14:40:34 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 540966 00:03:52.362 14:40:34 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:52.362 14:40:34 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:52.362 14:40:34 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 540966' 00:03:52.362 killing process with pid 540966 00:03:52.362 14:40:34 -- common/autotest_common.sh@973 -- # kill 540966 00:03:52.362 14:40:34 -- common/autotest_common.sh@978 -- # wait 540966 00:03:54.270 14:40:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:54.270 14:40:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:54.270 14:40:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:54.270 14:40:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:54.270 14:40:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:54.270 14:40:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.270 14:40:36 -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 14:40:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:54.270 14:40:36 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
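The opal_revert_cleanup step above attaches each discovered controller to a freshly started spdk_tgt and issues bdev_nvme_opal_revert over JSON-RPC; here the revert fails with error 18 and the script deliberately tolerates that. A sketch of the same two RPC calls driven by hand, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock:

    # Attach the controller at 0000:88:00.0 as bdev "nvme0"...
    "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
    # ...then try to revert its TPer; failure is tolerated (OPAL may not be provisioned).
    "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true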
00:03:54.270 14:40:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.270 14:40:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.271 14:40:36 -- common/autotest_common.sh@10 -- # set +x 00:03:54.271 ************************************ 00:03:54.271 START TEST env 00:03:54.271 ************************************ 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:54.271 * Looking for test storage... 00:03:54.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:54.271 14:40:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.271 14:40:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.271 14:40:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.271 14:40:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.271 14:40:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.271 14:40:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.271 14:40:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.271 14:40:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.271 14:40:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.271 14:40:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.271 14:40:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.271 14:40:36 env -- scripts/common.sh@344 -- # case "$op" in 00:03:54.271 14:40:36 env -- scripts/common.sh@345 -- # : 1 00:03:54.271 14:40:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.271 14:40:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.271 14:40:36 env -- scripts/common.sh@365 -- # decimal 1 00:03:54.271 14:40:36 env -- scripts/common.sh@353 -- # local d=1 00:03:54.271 14:40:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.271 14:40:36 env -- scripts/common.sh@355 -- # echo 1 00:03:54.271 14:40:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.271 14:40:36 env -- scripts/common.sh@366 -- # decimal 2 00:03:54.271 14:40:36 env -- scripts/common.sh@353 -- # local d=2 00:03:54.271 14:40:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.271 14:40:36 env -- scripts/common.sh@355 -- # echo 2 00:03:54.271 14:40:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.271 14:40:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.271 14:40:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.271 14:40:36 env -- scripts/common.sh@368 -- # return 0 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:54.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.271 --rc genhtml_branch_coverage=1 00:03:54.271 --rc genhtml_function_coverage=1 00:03:54.271 --rc genhtml_legend=1 00:03:54.271 --rc geninfo_all_blocks=1 00:03:54.271 --rc geninfo_unexecuted_blocks=1 00:03:54.271 00:03:54.271 ' 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:54.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.271 --rc genhtml_branch_coverage=1 00:03:54.271 --rc genhtml_function_coverage=1 00:03:54.271 --rc genhtml_legend=1 00:03:54.271 --rc geninfo_all_blocks=1 00:03:54.271 --rc geninfo_unexecuted_blocks=1 00:03:54.271 00:03:54.271 ' 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:54.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.271 --rc genhtml_branch_coverage=1 00:03:54.271 --rc genhtml_function_coverage=1 00:03:54.271 --rc genhtml_legend=1 00:03:54.271 --rc geninfo_all_blocks=1 00:03:54.271 --rc geninfo_unexecuted_blocks=1 00:03:54.271 00:03:54.271 ' 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:54.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.271 --rc genhtml_branch_coverage=1 00:03:54.271 --rc genhtml_function_coverage=1 00:03:54.271 --rc genhtml_legend=1 00:03:54.271 --rc geninfo_all_blocks=1 00:03:54.271 --rc geninfo_unexecuted_blocks=1 00:03:54.271 00:03:54.271 ' 00:03:54.271 14:40:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.271 14:40:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.271 14:40:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.271 ************************************ 00:03:54.271 START TEST env_memory 00:03:54.271 ************************************ 00:03:54.271 14:40:36 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:54.271 00:03:54.271 00:03:54.271 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.271 http://cunit.sourceforge.net/ 00:03:54.271 00:03:54.271 00:03:54.271 Suite: memory 00:03:54.271 Test: alloc and free memory map ...[2024-12-11 14:40:36.883158] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:54.271 passed 00:03:54.271 Test: mem map translation ...[2024-12-11 14:40:36.903214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:54.271 [2024-12-11 14:40:36.903236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:54.271 [2024-12-11 14:40:36.903292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:54.271 [2024-12-11 14:40:36.903305] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:54.271 passed 00:03:54.271 Test: mem map registration ...[2024-12-11 14:40:36.946886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:54.271 [2024-12-11 14:40:36.946905] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:54.271 passed 00:03:54.271 Test: mem map adjacent registrations ...passed 00:03:54.271 00:03:54.271 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.271 suites 1 1 n/a 0 0 00:03:54.271 tests 4 4 4 0 0 00:03:54.271 asserts 152 152 152 0 n/a 00:03:54.271 00:03:54.271 Elapsed time = 0.147 seconds 00:03:54.271 00:03:54.271 real 0m0.155s 00:03:54.271 user 0m0.148s 00:03:54.271 sys 0m0.007s 00:03:54.271 14:40:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.271 14:40:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:54.271 ************************************ 00:03:54.271 END TEST env_memory 00:03:54.271 ************************************ 00:03:54.271 14:40:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:54.271 14:40:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.271 14:40:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.271 14:40:37 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.531 ************************************ 00:03:54.531 START TEST env_vtophys 00:03:54.531 ************************************ 00:03:54.531 14:40:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:54.532 EAL: lib.eal log level changed from notice to debug 00:03:54.532 EAL: Detected lcore 0 as core 0 on socket 0 00:03:54.532 EAL: Detected lcore 1 as core 1 on socket 0 00:03:54.532 EAL: Detected lcore 2 as core 2 on socket 0 00:03:54.532 EAL: Detected lcore 3 as core 3 on socket 0 00:03:54.532 EAL: Detected lcore 4 as core 4 on socket 0 00:03:54.532 EAL: Detected lcore 5 as core 5 on socket 0 00:03:54.532 EAL: Detected lcore 6 as core 8 on socket 0 00:03:54.532 EAL: Detected lcore 7 as core 9 on socket 0 00:03:54.532 EAL: Detected lcore 8 as core 10 on socket 0 00:03:54.532 EAL: Detected lcore 9 as core 11 on socket 0 00:03:54.532 EAL: Detected lcore 10 
as core 12 on socket 0 00:03:54.532 EAL: Detected lcore 11 as core 13 on socket 0 00:03:54.532 EAL: Detected lcore 12 as core 0 on socket 1 00:03:54.532 EAL: Detected lcore 13 as core 1 on socket 1 00:03:54.532 EAL: Detected lcore 14 as core 2 on socket 1 00:03:54.532 EAL: Detected lcore 15 as core 3 on socket 1 00:03:54.532 EAL: Detected lcore 16 as core 4 on socket 1 00:03:54.532 EAL: Detected lcore 17 as core 5 on socket 1 00:03:54.532 EAL: Detected lcore 18 as core 8 on socket 1 00:03:54.532 EAL: Detected lcore 19 as core 9 on socket 1 00:03:54.532 EAL: Detected lcore 20 as core 10 on socket 1 00:03:54.532 EAL: Detected lcore 21 as core 11 on socket 1 00:03:54.532 EAL: Detected lcore 22 as core 12 on socket 1 00:03:54.532 EAL: Detected lcore 23 as core 13 on socket 1 00:03:54.532 EAL: Detected lcore 24 as core 0 on socket 0 00:03:54.532 EAL: Detected lcore 25 as core 1 on socket 0 00:03:54.532 EAL: Detected lcore 26 as core 2 on socket 0 00:03:54.532 EAL: Detected lcore 27 as core 3 on socket 0 00:03:54.532 EAL: Detected lcore 28 as core 4 on socket 0 00:03:54.532 EAL: Detected lcore 29 as core 5 on socket 0 00:03:54.532 EAL: Detected lcore 30 as core 8 on socket 0 00:03:54.532 EAL: Detected lcore 31 as core 9 on socket 0 00:03:54.532 EAL: Detected lcore 32 as core 10 on socket 0 00:03:54.532 EAL: Detected lcore 33 as core 11 on socket 0 00:03:54.532 EAL: Detected lcore 34 as core 12 on socket 0 00:03:54.532 EAL: Detected lcore 35 as core 13 on socket 0 00:03:54.532 EAL: Detected lcore 36 as core 0 on socket 1 00:03:54.532 EAL: Detected lcore 37 as core 1 on socket 1 00:03:54.532 EAL: Detected lcore 38 as core 2 on socket 1 00:03:54.532 EAL: Detected lcore 39 as core 3 on socket 1 00:03:54.532 EAL: Detected lcore 40 as core 4 on socket 1 00:03:54.532 EAL: Detected lcore 41 as core 5 on socket 1 00:03:54.532 EAL: Detected lcore 42 as core 8 on socket 1 00:03:54.532 EAL: Detected lcore 43 as core 9 on socket 1 00:03:54.532 EAL: Detected lcore 44 as core 10 on socket 1 00:03:54.532 EAL: Detected lcore 45 as core 11 on socket 1 00:03:54.532 EAL: Detected lcore 46 as core 12 on socket 1 00:03:54.532 EAL: Detected lcore 47 as core 13 on socket 1 00:03:54.532 EAL: Maximum logical cores by configuration: 128 00:03:54.532 EAL: Detected CPU lcores: 48 00:03:54.532 EAL: Detected NUMA nodes: 2 00:03:54.532 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:54.532 EAL: Detected shared linkage of DPDK 00:03:54.532 EAL: No shared files mode enabled, IPC will be disabled 00:03:54.532 EAL: Bus pci wants IOVA as 'DC' 00:03:54.532 EAL: Buses did not request a specific IOVA mode. 00:03:54.532 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:54.532 EAL: Selected IOVA mode 'VA' 00:03:54.532 EAL: Probing VFIO support... 00:03:54.532 EAL: IOMMU type 1 (Type 1) is supported 00:03:54.532 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:54.532 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:54.532 EAL: VFIO support initialized 00:03:54.532 EAL: Ask a virtual area of 0x2e000 bytes 00:03:54.532 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:54.532 EAL: Setting up physically contiguous memory... 
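The "Setting up physically contiguous memory" phase that follows draws on the hugepages reserved earlier (the node0/node1 2048kB pools in the setup.sh status table). A generic sysfs sketch for checking what each NUMA node has available before a run; these paths are the standard kernel interface, not commands from this log:

    # Free vs. total 2 MiB hugepages per NUMA node (compare the setup.sh table above).
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "$node: $(cat "$node/hugepages/hugepages-2048kB/free_hugepages") free of" \
             "$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")"
    done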
00:03:54.532 EAL: Setting maximum number of open files to 524288 00:03:54.532 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:54.532 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:54.532 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:54.532 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:54.532 EAL: Ask a virtual area of 0x61000 bytes 00:03:54.532 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:54.532 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:54.532 EAL: Ask a virtual area of 0x400000000 bytes 00:03:54.532 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:54.532 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:54.532 EAL: Hugepages will be freed exactly as allocated. 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: TSC frequency is ~2700000 KHz 00:03:54.532 EAL: Main lcore 0 is ready (tid=7fc7011dea00;cpuset=[0]) 00:03:54.532 EAL: Trying to obtain current memory policy. 00:03:54.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.532 EAL: Restoring previous memory policy: 0 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was expanded by 2MB 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: No PCI address specified using 'addr=<id>' in: bus=pci 00:03:54.532 EAL: Mem event callback 'spdk:(nil)' registered 00:03:54.532 00:03:54.532 00:03:54.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.532 http://cunit.sourceforge.net/ 00:03:54.532 00:03:54.532 00:03:54.532 Suite: components_suite 00:03:54.532 Test: vtophys_malloc_test ...passed 00:03:54.532 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:54.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.532 EAL: Restoring previous memory policy: 4 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was expanded by 4MB 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was shrunk by 4MB 00:03:54.532 EAL: Trying to obtain current memory policy. 00:03:54.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.532 EAL: Restoring previous memory policy: 4 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was expanded by 6MB 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was shrunk by 6MB 00:03:54.532 EAL: Trying to obtain current memory policy. 00:03:54.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.532 EAL: Restoring previous memory policy: 4 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was expanded by 10MB 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was shrunk by 10MB 00:03:54.532 EAL: Trying to obtain current memory policy. 
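Each memseg list reserved above spans 0x400000000 bytes of virtual address space, which follows directly from the n_segs:8192 and hugepage_sz:2097152 parameters printed when the lists were created: 8192 segments of one 2 MiB page each. The arithmetic, checked in bash:

    # 8192 segments x 2 MiB per page = 16 GiB of reserved VA per memseg list.
    printf '0x%x\n' $((8192 * 2097152))   # prints 0x400000000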
00:03:54.532 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.532 EAL: Restoring previous memory policy: 4 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was expanded by 18MB 00:03:54.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.532 EAL: request: mp_malloc_sync 00:03:54.532 EAL: No shared files mode enabled, IPC is disabled 00:03:54.532 EAL: Heap on socket 0 was shrunk by 18MB 00:03:54.532 EAL: Trying to obtain current memory policy. 00:03:54.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.533 EAL: Restoring previous memory policy: 4 00:03:54.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.533 EAL: request: mp_malloc_sync 00:03:54.533 EAL: No shared files mode enabled, IPC is disabled 00:03:54.533 EAL: Heap on socket 0 was expanded by 34MB 00:03:54.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.533 EAL: request: mp_malloc_sync 00:03:54.533 EAL: No shared files mode enabled, IPC is disabled 00:03:54.533 EAL: Heap on socket 0 was shrunk by 34MB 00:03:54.533 EAL: Trying to obtain current memory policy. 00:03:54.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.533 EAL: Restoring previous memory policy: 4 00:03:54.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.533 EAL: request: mp_malloc_sync 00:03:54.533 EAL: No shared files mode enabled, IPC is disabled 00:03:54.533 EAL: Heap on socket 0 was expanded by 66MB 00:03:54.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.533 EAL: request: mp_malloc_sync 00:03:54.533 EAL: No shared files mode enabled, IPC is disabled 00:03:54.533 EAL: Heap on socket 0 was shrunk by 66MB 00:03:54.533 EAL: Trying to obtain current memory policy. 00:03:54.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.533 EAL: Restoring previous memory policy: 4 00:03:54.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.533 EAL: request: mp_malloc_sync 00:03:54.533 EAL: No shared files mode enabled, IPC is disabled 00:03:54.533 EAL: Heap on socket 0 was expanded by 130MB 00:03:54.533 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.533 EAL: request: mp_malloc_sync 00:03:54.533 EAL: No shared files mode enabled, IPC is disabled 00:03:54.533 EAL: Heap on socket 0 was shrunk by 130MB 00:03:54.533 EAL: Trying to obtain current memory policy. 00:03:54.533 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.791 EAL: Restoring previous memory policy: 4 00:03:54.791 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.791 EAL: request: mp_malloc_sync 00:03:54.791 EAL: No shared files mode enabled, IPC is disabled 00:03:54.791 EAL: Heap on socket 0 was expanded by 258MB 00:03:54.791 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.791 EAL: request: mp_malloc_sync 00:03:54.791 EAL: No shared files mode enabled, IPC is disabled 00:03:54.791 EAL: Heap on socket 0 was shrunk by 258MB 00:03:54.791 EAL: Trying to obtain current memory policy. 
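The vtophys_spdk_malloc_test expansions in this stretch are not arbitrary: the heap grows by 4, 6, 10, 18, 34, 66, 130 and 258 MB, with 514 and 1026 MB still to come below, which is 2^k + 2 MB for k = 1..10, so each step roughly doubles. The progression reproduced in bash:

    # Heap sizes exercised by vtophys_spdk_malloc_test: (2^k + 2) MB for k = 1..10.
    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB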
00:03:54.791 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.051 EAL: Restoring previous memory policy: 4 00:03:55.051 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.051 EAL: request: mp_malloc_sync 00:03:55.051 EAL: No shared files mode enabled, IPC is disabled 00:03:55.051 EAL: Heap on socket 0 was expanded by 514MB 00:03:55.051 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.051 EAL: request: mp_malloc_sync 00:03:55.051 EAL: No shared files mode enabled, IPC is disabled 00:03:55.051 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.051 EAL: Trying to obtain current memory policy. 00:03:55.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.311 EAL: Restoring previous memory policy: 4 00:03:55.311 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.311 EAL: request: mp_malloc_sync 00:03:55.311 EAL: No shared files mode enabled, IPC is disabled 00:03:55.311 EAL: Heap on socket 0 was expanded by 1026MB 00:03:55.570 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.830 EAL: request: mp_malloc_sync 00:03:55.830 EAL: No shared files mode enabled, IPC is disabled 00:03:55.830 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:55.830 passed 00:03:55.830 00:03:55.830 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.830 suites 1 1 n/a 0 0 00:03:55.830 tests 2 2 2 0 0 00:03:55.830 asserts 497 497 497 0 n/a 00:03:55.830 00:03:55.830 Elapsed time = 1.323 seconds 00:03:55.830 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.830 EAL: request: mp_malloc_sync 00:03:55.830 EAL: No shared files mode enabled, IPC is disabled 00:03:55.830 EAL: Heap on socket 0 was shrunk by 2MB 00:03:55.830 EAL: No shared files mode enabled, IPC is disabled 00:03:55.830 EAL: No shared files mode enabled, IPC is disabled 00:03:55.830 EAL: No shared files mode enabled, IPC is disabled 00:03:55.830 00:03:55.830 real 0m1.437s 00:03:55.830 user 0m0.844s 00:03:55.830 sys 0m0.564s 00:03:55.830 14:40:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.830 14:40:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:55.830 ************************************ 00:03:55.830 END TEST env_vtophys 00:03:55.830 ************************************ 00:03:55.830 14:40:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.830 14:40:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.830 14:40:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.830 14:40:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.830 ************************************ 00:03:55.830 START TEST env_pci 00:03:55.830 ************************************ 00:03:55.830 14:40:38 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.830 00:03:55.830 00:03:55.830 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.830 http://cunit.sourceforge.net/ 00:03:55.830 00:03:55.830 00:03:55.830 Suite: pci 00:03:55.830 Test: pci_hook ...[2024-12-11 14:40:38.543063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 541861 has claimed it 00:03:55.830 EAL: Cannot find device (10000:00:01.0) 00:03:55.830 EAL: Failed to attach device on primary process 00:03:55.830 passed 00:03:55.830 00:03:55.830 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:55.830 suites 1 1 n/a 0 0 00:03:55.830 tests 1 1 1 0 0 00:03:55.830 asserts 25 25 25 0 n/a 00:03:55.830 00:03:55.830 Elapsed time = 0.021 seconds 00:03:55.830 00:03:55.830 real 0m0.034s 00:03:55.830 user 0m0.009s 00:03:55.830 sys 0m0.024s 00:03:55.830 14:40:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.830 14:40:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.830 ************************************ 00:03:55.830 END TEST env_pci 00:03:55.830 ************************************ 00:03:55.830 14:40:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.830 14:40:38 env -- env/env.sh@15 -- # uname 00:03:55.830 14:40:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.830 14:40:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.830 14:40:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.830 14:40:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:55.830 14:40:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.830 14:40:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.090 ************************************ 00:03:56.090 START TEST env_dpdk_post_init 00:03:56.090 ************************************ 00:03:56.090 14:40:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:56.090 EAL: Detected CPU lcores: 48 00:03:56.090 EAL: Detected NUMA nodes: 2 00:03:56.090 EAL: Detected shared linkage of DPDK 00:03:56.090 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.090 EAL: Selected IOVA mode 'VA' 00:03:56.090 EAL: VFIO support initialized 00:03:56.090 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.090 EAL: Using IOMMU type 1 (Type 1) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:56.090 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:56.350 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:56.920 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 
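[Editor's note] The probe lines above come from DPDK's PCI bus scan inside the env_dpdk_post_init test: each Intel I/OAT DMA channel (8086:0e2x) is claimed by the spdk_ioat driver and the NVMe device at 0000:88:00.0 by spdk_nvme. The run can be repeated outside the harness with the exact command the run_test trace shows — a sketch only, assuming hugepages and VFIO are already configured by the autotest setup, which is why it needs root:

    # command copied verbatim from the run_test trace above
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000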
00:04:00.198 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:00.198 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:00.456 Starting DPDK initialization... 00:04:00.456 Starting SPDK post initialization... 00:04:00.456 SPDK NVMe probe 00:04:00.456 Attaching to 0000:88:00.0 00:04:00.456 Attached to 0000:88:00.0 00:04:00.456 Cleaning up... 00:04:00.456 00:04:00.456 real 0m4.420s 00:04:00.456 user 0m3.026s 00:04:00.456 sys 0m0.447s 00:04:00.456 14:40:43 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.456 14:40:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 ************************************ 00:04:00.456 END TEST env_dpdk_post_init 00:04:00.456 ************************************ 00:04:00.456 14:40:43 env -- env/env.sh@26 -- # uname 00:04:00.456 14:40:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:00.456 14:40:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.456 14:40:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.456 14:40:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.456 14:40:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 ************************************ 00:04:00.456 START TEST env_mem_callbacks 00:04:00.456 ************************************ 00:04:00.456 14:40:43 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:00.456 EAL: Detected CPU lcores: 48 00:04:00.456 EAL: Detected NUMA nodes: 2 00:04:00.456 EAL: Detected shared linkage of DPDK 00:04:00.456 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:00.456 EAL: Selected IOVA mode 'VA' 00:04:00.456 EAL: VFIO support initialized 00:04:00.456 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:00.456 00:04:00.456 00:04:00.456 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.456 http://cunit.sourceforge.net/ 00:04:00.456 00:04:00.456 00:04:00.456 Suite: memory 00:04:00.456 Test: test ... 
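[Editor's note] The register/unregister trace that follows is the substance of this suite: every malloc large enough to force a new EAL allocation is reported to SPDK's registered mem-event callback (the "register <vaddr> <len>" lines), and the matching free triggers an "unregister" before the memory is returned — note that the small "malloc 64" below produces no register line because it fits in an already-registered region. Re-running just this suite outside env.sh is a one-liner — a sketch, reusing the binary path from the run_test trace above:

    # needs root for hugepage access, same as the harness run
    sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks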
00:04:00.456 register 0x200000200000 2097152 00:04:00.456 malloc 3145728 00:04:00.456 register 0x200000400000 4194304 00:04:00.456 buf 0x200000500000 len 3145728 PASSED 00:04:00.456 malloc 64 00:04:00.456 buf 0x2000004fff40 len 64 PASSED 00:04:00.456 malloc 4194304 00:04:00.456 register 0x200000800000 6291456 00:04:00.456 buf 0x200000a00000 len 4194304 PASSED 00:04:00.456 free 0x200000500000 3145728 00:04:00.456 free 0x2000004fff40 64 00:04:00.456 unregister 0x200000400000 4194304 PASSED 00:04:00.456 free 0x200000a00000 4194304 00:04:00.456 unregister 0x200000800000 6291456 PASSED 00:04:00.456 malloc 8388608 00:04:00.456 register 0x200000400000 10485760 00:04:00.456 buf 0x200000600000 len 8388608 PASSED 00:04:00.456 free 0x200000600000 8388608 00:04:00.456 unregister 0x200000400000 10485760 PASSED 00:04:00.456 passed 00:04:00.456 00:04:00.456 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.456 suites 1 1 n/a 0 0 00:04:00.456 tests 1 1 1 0 0 00:04:00.456 asserts 15 15 15 0 n/a 00:04:00.456 00:04:00.456 Elapsed time = 0.005 seconds 00:04:00.456 00:04:00.456 real 0m0.049s 00:04:00.456 user 0m0.010s 00:04:00.456 sys 0m0.039s 00:04:00.456 14:40:43 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.456 14:40:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 ************************************ 00:04:00.456 END TEST env_mem_callbacks 00:04:00.456 ************************************ 00:04:00.456 00:04:00.456 real 0m6.488s 00:04:00.456 user 0m4.233s 00:04:00.456 sys 0m1.303s 00:04:00.456 14:40:43 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.456 14:40:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 ************************************ 00:04:00.456 END TEST env 00:04:00.456 ************************************ 00:04:00.456 14:40:43 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:00.456 14:40:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.456 14:40:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.456 14:40:43 -- common/autotest_common.sh@10 -- # set +x 00:04:00.456 ************************************ 00:04:00.456 START TEST rpc 00:04:00.456 ************************************ 00:04:00.456 14:40:43 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:00.715 * Looking for test storage... 
00:04:00.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.715 14:40:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.715 14:40:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.715 14:40:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.715 14:40:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.715 14:40:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.715 14:40:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:00.715 14:40:43 rpc -- scripts/common.sh@345 -- # : 1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.715 14:40:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:00.715 14:40:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@353 -- # local d=1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.715 14:40:43 rpc -- scripts/common.sh@355 -- # echo 1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.715 14:40:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@353 -- # local d=2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.715 14:40:43 rpc -- scripts/common.sh@355 -- # echo 2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.715 14:40:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.715 14:40:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.715 14:40:43 rpc -- scripts/common.sh@368 -- # return 0 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:00.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.715 --rc genhtml_branch_coverage=1 00:04:00.715 --rc genhtml_function_coverage=1 00:04:00.715 --rc genhtml_legend=1 00:04:00.715 --rc geninfo_all_blocks=1 00:04:00.715 --rc geninfo_unexecuted_blocks=1 00:04:00.715 00:04:00.715 ' 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:00.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.715 --rc genhtml_branch_coverage=1 00:04:00.715 --rc genhtml_function_coverage=1 00:04:00.715 --rc genhtml_legend=1 00:04:00.715 --rc geninfo_all_blocks=1 00:04:00.715 --rc geninfo_unexecuted_blocks=1 00:04:00.715 00:04:00.715 ' 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:00.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.715 --rc genhtml_branch_coverage=1 00:04:00.715 --rc genhtml_function_coverage=1 
00:04:00.715 --rc genhtml_legend=1 00:04:00.715 --rc geninfo_all_blocks=1 00:04:00.715 --rc geninfo_unexecuted_blocks=1 00:04:00.715 00:04:00.715 ' 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:00.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.715 --rc genhtml_branch_coverage=1 00:04:00.715 --rc genhtml_function_coverage=1 00:04:00.715 --rc genhtml_legend=1 00:04:00.715 --rc geninfo_all_blocks=1 00:04:00.715 --rc geninfo_unexecuted_blocks=1 00:04:00.715 00:04:00.715 ' 00:04:00.715 14:40:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=542664 00:04:00.715 14:40:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:00.715 14:40:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.715 14:40:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 542664 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 542664 ']' 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.715 14:40:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.715 [2024-12-11 14:40:43.412242] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:00.715 [2024-12-11 14:40:43.412330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542664 ] 00:04:00.715 [2024-12-11 14:40:43.477652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.972 [2024-12-11 14:40:43.533141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.972 [2024-12-11 14:40:43.533201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 542664' to capture a snapshot of events at runtime. 00:04:00.972 [2024-12-11 14:40:43.533229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.973 [2024-12-11 14:40:43.533240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.973 [2024-12-11 14:40:43.533250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid542664 for offline analysis/debug. 
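[Editor's note] With the target up, the rpc_integrity flow that follows is just a sequence of JSON-RPC calls over /var/tmp/spdk.sock issued through the rpc_cmd wrapper. The same sequence can be driven by hand with scripts/rpc.py — a sketch only, not part of the captured run; it assumes the default RPC socket and an spdk_tgt started as above:

    ./scripts/rpc.py bdev_malloc_create 8 512             # 8 MiB, 512 B blocks -> prints the new name, Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # 2: Malloc0 plus Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # back to 0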
00:04:00.973 [2024-12-11 14:40:43.533804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.230 14:40:43 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:01.230 14:40:43 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:01.230 14:40:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.231 14:40:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.231 14:40:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:01.231 14:40:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:01.231 14:40:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.231 14:40:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.231 14:40:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 ************************************ 00:04:01.231 START TEST rpc_integrity 00:04:01.231 ************************************ 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.231 { 00:04:01.231 "name": "Malloc0", 00:04:01.231 "aliases": [ 00:04:01.231 "c26db2ce-fa2d-4e61-9dc2-a5beff7046fd" 00:04:01.231 ], 00:04:01.231 "product_name": "Malloc disk", 00:04:01.231 "block_size": 512, 00:04:01.231 "num_blocks": 16384, 00:04:01.231 "uuid": "c26db2ce-fa2d-4e61-9dc2-a5beff7046fd", 00:04:01.231 "assigned_rate_limits": { 00:04:01.231 "rw_ios_per_sec": 0, 00:04:01.231 "rw_mbytes_per_sec": 0, 00:04:01.231 "r_mbytes_per_sec": 0, 00:04:01.231 "w_mbytes_per_sec": 0 00:04:01.231 }, 
00:04:01.231 "claimed": false, 00:04:01.231 "zoned": false, 00:04:01.231 "supported_io_types": { 00:04:01.231 "read": true, 00:04:01.231 "write": true, 00:04:01.231 "unmap": true, 00:04:01.231 "flush": true, 00:04:01.231 "reset": true, 00:04:01.231 "nvme_admin": false, 00:04:01.231 "nvme_io": false, 00:04:01.231 "nvme_io_md": false, 00:04:01.231 "write_zeroes": true, 00:04:01.231 "zcopy": true, 00:04:01.231 "get_zone_info": false, 00:04:01.231 "zone_management": false, 00:04:01.231 "zone_append": false, 00:04:01.231 "compare": false, 00:04:01.231 "compare_and_write": false, 00:04:01.231 "abort": true, 00:04:01.231 "seek_hole": false, 00:04:01.231 "seek_data": false, 00:04:01.231 "copy": true, 00:04:01.231 "nvme_iov_md": false 00:04:01.231 }, 00:04:01.231 "memory_domains": [ 00:04:01.231 { 00:04:01.231 "dma_device_id": "system", 00:04:01.231 "dma_device_type": 1 00:04:01.231 }, 00:04:01.231 { 00:04:01.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.231 "dma_device_type": 2 00:04:01.231 } 00:04:01.231 ], 00:04:01.231 "driver_specific": {} 00:04:01.231 } 00:04:01.231 ]' 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 [2024-12-11 14:40:43.926677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:01.231 [2024-12-11 14:40:43.926720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.231 [2024-12-11 14:40:43.926742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16e0620 00:04:01.231 [2024-12-11 14:40:43.926756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.231 [2024-12-11 14:40:43.928106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.231 [2024-12-11 14:40:43.928129] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.231 Passthru0 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.231 { 00:04:01.231 "name": "Malloc0", 00:04:01.231 "aliases": [ 00:04:01.231 "c26db2ce-fa2d-4e61-9dc2-a5beff7046fd" 00:04:01.231 ], 00:04:01.231 "product_name": "Malloc disk", 00:04:01.231 "block_size": 512, 00:04:01.231 "num_blocks": 16384, 00:04:01.231 "uuid": "c26db2ce-fa2d-4e61-9dc2-a5beff7046fd", 00:04:01.231 "assigned_rate_limits": { 00:04:01.231 "rw_ios_per_sec": 0, 00:04:01.231 "rw_mbytes_per_sec": 0, 00:04:01.231 "r_mbytes_per_sec": 0, 00:04:01.231 "w_mbytes_per_sec": 0 00:04:01.231 }, 00:04:01.231 "claimed": true, 00:04:01.231 "claim_type": "exclusive_write", 00:04:01.231 "zoned": false, 00:04:01.231 "supported_io_types": { 00:04:01.231 "read": true, 00:04:01.231 "write": true, 00:04:01.231 "unmap": true, 00:04:01.231 "flush": 
true, 00:04:01.231 "reset": true, 00:04:01.231 "nvme_admin": false, 00:04:01.231 "nvme_io": false, 00:04:01.231 "nvme_io_md": false, 00:04:01.231 "write_zeroes": true, 00:04:01.231 "zcopy": true, 00:04:01.231 "get_zone_info": false, 00:04:01.231 "zone_management": false, 00:04:01.231 "zone_append": false, 00:04:01.231 "compare": false, 00:04:01.231 "compare_and_write": false, 00:04:01.231 "abort": true, 00:04:01.231 "seek_hole": false, 00:04:01.231 "seek_data": false, 00:04:01.231 "copy": true, 00:04:01.231 "nvme_iov_md": false 00:04:01.231 }, 00:04:01.231 "memory_domains": [ 00:04:01.231 { 00:04:01.231 "dma_device_id": "system", 00:04:01.231 "dma_device_type": 1 00:04:01.231 }, 00:04:01.231 { 00:04:01.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.231 "dma_device_type": 2 00:04:01.231 } 00:04:01.231 ], 00:04:01.231 "driver_specific": {} 00:04:01.231 }, 00:04:01.231 { 00:04:01.231 "name": "Passthru0", 00:04:01.231 "aliases": [ 00:04:01.231 "3dbf85e9-5a7f-5747-be4d-f355f6436288" 00:04:01.231 ], 00:04:01.231 "product_name": "passthru", 00:04:01.231 "block_size": 512, 00:04:01.231 "num_blocks": 16384, 00:04:01.231 "uuid": "3dbf85e9-5a7f-5747-be4d-f355f6436288", 00:04:01.231 "assigned_rate_limits": { 00:04:01.231 "rw_ios_per_sec": 0, 00:04:01.231 "rw_mbytes_per_sec": 0, 00:04:01.231 "r_mbytes_per_sec": 0, 00:04:01.231 "w_mbytes_per_sec": 0 00:04:01.231 }, 00:04:01.231 "claimed": false, 00:04:01.231 "zoned": false, 00:04:01.231 "supported_io_types": { 00:04:01.231 "read": true, 00:04:01.231 "write": true, 00:04:01.231 "unmap": true, 00:04:01.231 "flush": true, 00:04:01.231 "reset": true, 00:04:01.231 "nvme_admin": false, 00:04:01.231 "nvme_io": false, 00:04:01.231 "nvme_io_md": false, 00:04:01.231 "write_zeroes": true, 00:04:01.231 "zcopy": true, 00:04:01.231 "get_zone_info": false, 00:04:01.231 "zone_management": false, 00:04:01.231 "zone_append": false, 00:04:01.231 "compare": false, 00:04:01.231 "compare_and_write": false, 00:04:01.231 "abort": true, 00:04:01.231 "seek_hole": false, 00:04:01.231 "seek_data": false, 00:04:01.231 "copy": true, 00:04:01.231 "nvme_iov_md": false 00:04:01.231 }, 00:04:01.231 "memory_domains": [ 00:04:01.231 { 00:04:01.231 "dma_device_id": "system", 00:04:01.231 "dma_device_type": 1 00:04:01.231 }, 00:04:01.231 { 00:04:01.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.231 "dma_device_type": 2 00:04:01.231 } 00:04:01.231 ], 00:04:01.231 "driver_specific": { 00:04:01.231 "passthru": { 00:04:01.231 "name": "Passthru0", 00:04:01.231 "base_bdev_name": "Malloc0" 00:04:01.231 } 00:04:01.231 } 00:04:01.231 } 00:04:01.231 ]' 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.231 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.231 14:40:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:01.232 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.232 14:40:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 14:40:44 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.490 14:40:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.490 14:40:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.490 14:40:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.490 00:04:01.490 real 0m0.214s 00:04:01.490 user 0m0.139s 00:04:01.490 sys 0m0.019s 00:04:01.490 14:40:44 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 ************************************ 00:04:01.490 END TEST rpc_integrity 00:04:01.490 ************************************ 00:04:01.490 14:40:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:01.490 14:40:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.490 14:40:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.490 14:40:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 ************************************ 00:04:01.490 START TEST rpc_plugins 00:04:01.490 ************************************ 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:01.490 { 00:04:01.490 "name": "Malloc1", 00:04:01.490 "aliases": [ 00:04:01.490 "681e0d16-d69e-4973-ae0b-153ad984e137" 00:04:01.490 ], 00:04:01.490 "product_name": "Malloc disk", 00:04:01.490 "block_size": 4096, 00:04:01.490 "num_blocks": 256, 00:04:01.490 "uuid": "681e0d16-d69e-4973-ae0b-153ad984e137", 00:04:01.490 "assigned_rate_limits": { 00:04:01.490 "rw_ios_per_sec": 0, 00:04:01.490 "rw_mbytes_per_sec": 0, 00:04:01.490 "r_mbytes_per_sec": 0, 00:04:01.490 "w_mbytes_per_sec": 0 00:04:01.490 }, 00:04:01.490 "claimed": false, 00:04:01.490 "zoned": false, 00:04:01.490 "supported_io_types": { 00:04:01.490 "read": true, 00:04:01.490 "write": true, 00:04:01.490 "unmap": true, 00:04:01.490 "flush": true, 00:04:01.490 "reset": true, 00:04:01.490 "nvme_admin": false, 00:04:01.490 "nvme_io": false, 00:04:01.490 "nvme_io_md": false, 00:04:01.490 "write_zeroes": true, 00:04:01.490 "zcopy": true, 00:04:01.490 "get_zone_info": false, 00:04:01.490 "zone_management": false, 00:04:01.490 "zone_append": false, 00:04:01.490 "compare": false, 00:04:01.490 "compare_and_write": false, 00:04:01.490 "abort": true, 00:04:01.490 "seek_hole": false, 00:04:01.490 "seek_data": false, 00:04:01.490 "copy": true, 00:04:01.490 "nvme_iov_md": false 
00:04:01.490 }, 00:04:01.490 "memory_domains": [ 00:04:01.490 { 00:04:01.490 "dma_device_id": "system", 00:04:01.490 "dma_device_type": 1 00:04:01.490 }, 00:04:01.490 { 00:04:01.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.490 "dma_device_type": 2 00:04:01.490 } 00:04:01.490 ], 00:04:01.490 "driver_specific": {} 00:04:01.490 } 00:04:01.490 ]' 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:01.490 14:40:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:01.490 00:04:01.490 real 0m0.105s 00:04:01.490 user 0m0.073s 00:04:01.490 sys 0m0.004s 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 ************************************ 00:04:01.490 END TEST rpc_plugins 00:04:01.490 ************************************ 00:04:01.490 14:40:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:01.490 14:40:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.490 14:40:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.490 14:40:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 ************************************ 00:04:01.490 START TEST rpc_trace_cmd_test 00:04:01.490 ************************************ 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.490 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:01.490 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid542664", 00:04:01.490 "tpoint_group_mask": "0x8", 00:04:01.490 "iscsi_conn": { 00:04:01.490 "mask": "0x2", 00:04:01.490 "tpoint_mask": "0x0" 00:04:01.490 }, 00:04:01.490 "scsi": { 00:04:01.490 "mask": "0x4", 00:04:01.490 "tpoint_mask": "0x0" 00:04:01.490 }, 00:04:01.490 "bdev": { 00:04:01.490 "mask": "0x8", 00:04:01.490 "tpoint_mask": "0xffffffffffffffff" 00:04:01.490 }, 00:04:01.490 "nvmf_rdma": { 00:04:01.490 "mask": "0x10", 00:04:01.490 "tpoint_mask": "0x0" 00:04:01.490 }, 00:04:01.490 "nvmf_tcp": { 00:04:01.490 "mask": "0x20", 00:04:01.490 
"tpoint_mask": "0x0" 00:04:01.490 }, 00:04:01.490 "ftl": { 00:04:01.490 "mask": "0x40", 00:04:01.490 "tpoint_mask": "0x0" 00:04:01.490 }, 00:04:01.490 "blobfs": { 00:04:01.490 "mask": "0x80", 00:04:01.490 "tpoint_mask": "0x0" 00:04:01.490 }, 00:04:01.491 "dsa": { 00:04:01.491 "mask": "0x200", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "thread": { 00:04:01.491 "mask": "0x400", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "nvme_pcie": { 00:04:01.491 "mask": "0x800", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "iaa": { 00:04:01.491 "mask": "0x1000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "nvme_tcp": { 00:04:01.491 "mask": "0x2000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "bdev_nvme": { 00:04:01.491 "mask": "0x4000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "sock": { 00:04:01.491 "mask": "0x8000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "blob": { 00:04:01.491 "mask": "0x10000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "bdev_raid": { 00:04:01.491 "mask": "0x20000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 }, 00:04:01.491 "scheduler": { 00:04:01.491 "mask": "0x40000", 00:04:01.491 "tpoint_mask": "0x0" 00:04:01.491 } 00:04:01.491 }' 00:04:01.491 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:01.749 00:04:01.749 real 0m0.180s 00:04:01.749 user 0m0.157s 00:04:01.749 sys 0m0.016s 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.749 14:40:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.749 ************************************ 00:04:01.749 END TEST rpc_trace_cmd_test 00:04:01.749 ************************************ 00:04:01.749 14:40:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:01.749 14:40:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:01.749 14:40:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:01.749 14:40:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.749 14:40:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.749 14:40:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.749 ************************************ 00:04:01.749 START TEST rpc_daemon_integrity 00:04:01.749 ************************************ 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.749 14:40:44 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.749 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:02.008 { 00:04:02.008 "name": "Malloc2", 00:04:02.008 "aliases": [ 00:04:02.008 "b1a12995-0c29-4a9c-a799-b8e50b909a73" 00:04:02.008 ], 00:04:02.008 "product_name": "Malloc disk", 00:04:02.008 "block_size": 512, 00:04:02.008 "num_blocks": 16384, 00:04:02.008 "uuid": "b1a12995-0c29-4a9c-a799-b8e50b909a73", 00:04:02.008 "assigned_rate_limits": { 00:04:02.008 "rw_ios_per_sec": 0, 00:04:02.008 "rw_mbytes_per_sec": 0, 00:04:02.008 "r_mbytes_per_sec": 0, 00:04:02.008 "w_mbytes_per_sec": 0 00:04:02.008 }, 00:04:02.008 "claimed": false, 00:04:02.008 "zoned": false, 00:04:02.008 "supported_io_types": { 00:04:02.008 "read": true, 00:04:02.008 "write": true, 00:04:02.008 "unmap": true, 00:04:02.008 "flush": true, 00:04:02.008 "reset": true, 00:04:02.008 "nvme_admin": false, 00:04:02.008 "nvme_io": false, 00:04:02.008 "nvme_io_md": false, 00:04:02.008 "write_zeroes": true, 00:04:02.008 "zcopy": true, 00:04:02.008 "get_zone_info": false, 00:04:02.008 "zone_management": false, 00:04:02.008 "zone_append": false, 00:04:02.008 "compare": false, 00:04:02.008 "compare_and_write": false, 00:04:02.008 "abort": true, 00:04:02.008 "seek_hole": false, 00:04:02.008 "seek_data": false, 00:04:02.008 "copy": true, 00:04:02.008 "nvme_iov_md": false 00:04:02.008 }, 00:04:02.008 "memory_domains": [ 00:04:02.008 { 00:04:02.008 "dma_device_id": "system", 00:04:02.008 "dma_device_type": 1 00:04:02.008 }, 00:04:02.008 { 00:04:02.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.008 "dma_device_type": 2 00:04:02.008 } 00:04:02.008 ], 00:04:02.008 "driver_specific": {} 00:04:02.008 } 00:04:02.008 ]' 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.008 [2024-12-11 14:40:44.561024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:02.008 
[2024-12-11 14:40:44.561078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.008 [2024-12-11 14:40:44.561099] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1824060 00:04:02.008 [2024-12-11 14:40:44.561111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.008 [2024-12-11 14:40:44.562286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.008 [2024-12-11 14:40:44.562314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.008 Passthru0 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.008 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.008 { 00:04:02.008 "name": "Malloc2", 00:04:02.008 "aliases": [ 00:04:02.008 "b1a12995-0c29-4a9c-a799-b8e50b909a73" 00:04:02.008 ], 00:04:02.008 "product_name": "Malloc disk", 00:04:02.008 "block_size": 512, 00:04:02.008 "num_blocks": 16384, 00:04:02.008 "uuid": "b1a12995-0c29-4a9c-a799-b8e50b909a73", 00:04:02.008 "assigned_rate_limits": { 00:04:02.008 "rw_ios_per_sec": 0, 00:04:02.008 "rw_mbytes_per_sec": 0, 00:04:02.008 "r_mbytes_per_sec": 0, 00:04:02.008 "w_mbytes_per_sec": 0 00:04:02.008 }, 00:04:02.008 "claimed": true, 00:04:02.008 "claim_type": "exclusive_write", 00:04:02.008 "zoned": false, 00:04:02.008 "supported_io_types": { 00:04:02.008 "read": true, 00:04:02.008 "write": true, 00:04:02.008 "unmap": true, 00:04:02.008 "flush": true, 00:04:02.008 "reset": true, 00:04:02.008 "nvme_admin": false, 00:04:02.008 "nvme_io": false, 00:04:02.008 "nvme_io_md": false, 00:04:02.008 "write_zeroes": true, 00:04:02.008 "zcopy": true, 00:04:02.008 "get_zone_info": false, 00:04:02.008 "zone_management": false, 00:04:02.008 "zone_append": false, 00:04:02.008 "compare": false, 00:04:02.008 "compare_and_write": false, 00:04:02.008 "abort": true, 00:04:02.008 "seek_hole": false, 00:04:02.008 "seek_data": false, 00:04:02.008 "copy": true, 00:04:02.008 "nvme_iov_md": false 00:04:02.008 }, 00:04:02.008 "memory_domains": [ 00:04:02.008 { 00:04:02.008 "dma_device_id": "system", 00:04:02.008 "dma_device_type": 1 00:04:02.008 }, 00:04:02.008 { 00:04:02.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.008 "dma_device_type": 2 00:04:02.008 } 00:04:02.008 ], 00:04:02.008 "driver_specific": {} 00:04:02.008 }, 00:04:02.008 { 00:04:02.008 "name": "Passthru0", 00:04:02.008 "aliases": [ 00:04:02.008 "1814cb7a-d43d-5a01-afea-57279e63af00" 00:04:02.008 ], 00:04:02.008 "product_name": "passthru", 00:04:02.008 "block_size": 512, 00:04:02.008 "num_blocks": 16384, 00:04:02.008 "uuid": "1814cb7a-d43d-5a01-afea-57279e63af00", 00:04:02.008 "assigned_rate_limits": { 00:04:02.008 "rw_ios_per_sec": 0, 00:04:02.008 "rw_mbytes_per_sec": 0, 00:04:02.008 "r_mbytes_per_sec": 0, 00:04:02.008 "w_mbytes_per_sec": 0 00:04:02.008 }, 00:04:02.008 "claimed": false, 00:04:02.008 "zoned": false, 00:04:02.008 "supported_io_types": { 00:04:02.008 "read": true, 00:04:02.008 "write": true, 00:04:02.008 "unmap": true, 00:04:02.008 "flush": true, 00:04:02.008 "reset": true, 
00:04:02.008 "nvme_admin": false, 00:04:02.009 "nvme_io": false, 00:04:02.009 "nvme_io_md": false, 00:04:02.009 "write_zeroes": true, 00:04:02.009 "zcopy": true, 00:04:02.009 "get_zone_info": false, 00:04:02.009 "zone_management": false, 00:04:02.009 "zone_append": false, 00:04:02.009 "compare": false, 00:04:02.009 "compare_and_write": false, 00:04:02.009 "abort": true, 00:04:02.009 "seek_hole": false, 00:04:02.009 "seek_data": false, 00:04:02.009 "copy": true, 00:04:02.009 "nvme_iov_md": false 00:04:02.009 }, 00:04:02.009 "memory_domains": [ 00:04:02.009 { 00:04:02.009 "dma_device_id": "system", 00:04:02.009 "dma_device_type": 1 00:04:02.009 }, 00:04:02.009 { 00:04:02.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.009 "dma_device_type": 2 00:04:02.009 } 00:04:02.009 ], 00:04:02.009 "driver_specific": { 00:04:02.009 "passthru": { 00:04:02.009 "name": "Passthru0", 00:04:02.009 "base_bdev_name": "Malloc2" 00:04:02.009 } 00:04:02.009 } 00:04:02.009 } 00:04:02.009 ]' 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.009 00:04:02.009 real 0m0.212s 00:04:02.009 user 0m0.134s 00:04:02.009 sys 0m0.021s 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.009 14:40:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.009 ************************************ 00:04:02.009 END TEST rpc_daemon_integrity 00:04:02.009 ************************************ 00:04:02.009 14:40:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:02.009 14:40:44 rpc -- rpc/rpc.sh@84 -- # killprocess 542664 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@954 -- # '[' -z 542664 ']' 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@958 -- # kill -0 542664 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@959 -- # uname 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542664 
00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542664' 00:04:02.009 killing process with pid 542664 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@973 -- # kill 542664 00:04:02.009 14:40:44 rpc -- common/autotest_common.sh@978 -- # wait 542664 00:04:02.575 00:04:02.575 real 0m1.943s 00:04:02.575 user 0m2.401s 00:04:02.575 sys 0m0.591s 00:04:02.575 14:40:45 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.575 14:40:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.575 ************************************ 00:04:02.575 END TEST rpc 00:04:02.575 ************************************ 00:04:02.575 14:40:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.575 14:40:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.575 14:40:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.575 14:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:02.575 ************************************ 00:04:02.575 START TEST skip_rpc 00:04:02.575 ************************************ 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:02.575 * Looking for test storage... 00:04:02.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.575 14:40:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:02.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.575 --rc genhtml_branch_coverage=1 00:04:02.575 --rc genhtml_function_coverage=1 00:04:02.575 --rc genhtml_legend=1 00:04:02.575 --rc geninfo_all_blocks=1 00:04:02.575 --rc geninfo_unexecuted_blocks=1 00:04:02.575 00:04:02.575 ' 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:02.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.575 --rc genhtml_branch_coverage=1 00:04:02.575 --rc genhtml_function_coverage=1 00:04:02.575 --rc genhtml_legend=1 00:04:02.575 --rc geninfo_all_blocks=1 00:04:02.575 --rc geninfo_unexecuted_blocks=1 00:04:02.575 00:04:02.575 ' 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:02.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.575 --rc genhtml_branch_coverage=1 00:04:02.575 --rc genhtml_function_coverage=1 00:04:02.575 --rc genhtml_legend=1 00:04:02.575 --rc geninfo_all_blocks=1 00:04:02.575 --rc geninfo_unexecuted_blocks=1 00:04:02.575 00:04:02.575 ' 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:02.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.575 --rc genhtml_branch_coverage=1 00:04:02.575 --rc genhtml_function_coverage=1 00:04:02.575 --rc genhtml_legend=1 00:04:02.575 --rc geninfo_all_blocks=1 00:04:02.575 --rc geninfo_unexecuted_blocks=1 00:04:02.575 00:04:02.575 ' 00:04:02.575 14:40:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:02.575 14:40:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:02.575 14:40:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.575 14:40:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.833 ************************************ 00:04:02.833 START TEST skip_rpc 00:04:02.833 ************************************ 00:04:02.833 14:40:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:02.833 
14:40:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=542996 00:04:02.833 14:40:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:02.833 14:40:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.833 14:40:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:02.833 [2024-12-11 14:40:45.425736] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:02.833 [2024-12-11 14:40:45.425816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542996 ] 00:04:02.833 [2024-12-11 14:40:45.491233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.833 [2024-12-11 14:40:45.549075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 542996 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 542996 ']' 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 542996 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 542996 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 542996' 00:04:08.156 killing process with pid 542996 00:04:08.156 14:40:50 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 542996 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 542996 00:04:08.156 00:04:08.156 real 0m5.448s 00:04:08.156 user 0m5.127s 00:04:08.156 sys 0m0.328s 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.156 14:40:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.156 ************************************ 00:04:08.156 END TEST skip_rpc 00:04:08.156 ************************************ 00:04:08.156 14:40:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:08.156 14:40:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.156 14:40:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.156 14:40:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.156 ************************************ 00:04:08.156 START TEST skip_rpc_with_json 00:04:08.156 ************************************ 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=543689 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 543689 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 543689 ']' 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:08.156 14:40:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.156 [2024-12-11 14:40:50.922657] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:04:08.156 [2024-12-11 14:40:50.922765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543689 ]
00:04:08.413 [2024-12-11 14:40:50.988942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:08.413 [2024-12-11 14:40:51.048515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:08.671 [2024-12-11 14:40:51.324315] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:08.671 request:
00:04:08.671 {
00:04:08.671   "trtype": "tcp",
00:04:08.671   "method": "nvmf_get_transports",
00:04:08.671   "req_id": 1
00:04:08.671 }
00:04:08.671 Got JSON-RPC error response
00:04:08.671 response:
00:04:08.671 {
00:04:08.671   "code": -19,
00:04:08.671   "message": "No such device"
00:04:08.671 }
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:08.671 [2024-12-11 14:40:51.332423] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:08.671 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:08.930 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:08.930 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:08.930 {
00:04:08.930   "subsystems": [
00:04:08.930     {
00:04:08.930       "subsystem": "fsdev",
00:04:08.930       "config": [
00:04:08.930         {
00:04:08.930           "method": "fsdev_set_opts",
00:04:08.930           "params": {
00:04:08.930             "fsdev_io_pool_size": 65535,
00:04:08.930             "fsdev_io_cache_size": 256
00:04:08.930           }
00:04:08.930         }
00:04:08.930       ]
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "vfio_user_target",
00:04:08.930       "config": null
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "keyring",
00:04:08.930       "config": []
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "iobuf",
00:04:08.930       "config": [
00:04:08.930         {
00:04:08.930           "method": "iobuf_set_options",
00:04:08.930           "params": {
00:04:08.930             "small_pool_count": 8192,
00:04:08.930             "large_pool_count": 1024,
00:04:08.930             "small_bufsize": 8192,
00:04:08.930             "large_bufsize": 135168,
00:04:08.930             "enable_numa": false
00:04:08.930           }
00:04:08.930         }
00:04:08.930       ]
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "sock",
00:04:08.930       "config": [
00:04:08.930         {
00:04:08.930           "method": "sock_set_default_impl",
00:04:08.930           "params": {
00:04:08.930             "impl_name": "posix"
00:04:08.930           }
00:04:08.930         },
00:04:08.930         {
00:04:08.930           "method": "sock_impl_set_options",
00:04:08.930           "params": {
00:04:08.930             "impl_name": "ssl",
00:04:08.930             "recv_buf_size": 4096,
00:04:08.930             "send_buf_size": 4096,
00:04:08.930             "enable_recv_pipe": true,
00:04:08.930             "enable_quickack": false,
00:04:08.930             "enable_placement_id": 0,
00:04:08.930             "enable_zerocopy_send_server": true,
00:04:08.930             "enable_zerocopy_send_client": false,
00:04:08.930             "zerocopy_threshold": 0,
00:04:08.930             "tls_version": 0,
00:04:08.930             "enable_ktls": false
00:04:08.930           }
00:04:08.930         },
00:04:08.930         {
00:04:08.930           "method": "sock_impl_set_options",
00:04:08.930           "params": {
00:04:08.930             "impl_name": "posix",
00:04:08.930             "recv_buf_size": 2097152,
00:04:08.930             "send_buf_size": 2097152,
00:04:08.930             "enable_recv_pipe": true,
00:04:08.930             "enable_quickack": false,
00:04:08.930             "enable_placement_id": 0,
00:04:08.930             "enable_zerocopy_send_server": true,
00:04:08.930             "enable_zerocopy_send_client": false,
00:04:08.930             "zerocopy_threshold": 0,
00:04:08.930             "tls_version": 0,
00:04:08.930             "enable_ktls": false
00:04:08.930           }
00:04:08.930         }
00:04:08.930       ]
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "vmd",
00:04:08.930       "config": []
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "accel",
00:04:08.930       "config": [
00:04:08.930         {
00:04:08.930           "method": "accel_set_options",
00:04:08.930           "params": {
00:04:08.930             "small_cache_size": 128,
00:04:08.930             "large_cache_size": 16,
00:04:08.930             "task_count": 2048,
00:04:08.930             "sequence_count": 2048,
00:04:08.930             "buf_count": 2048
00:04:08.930           }
00:04:08.930         }
00:04:08.930       ]
00:04:08.930     },
00:04:08.930     {
00:04:08.930       "subsystem": "bdev",
00:04:08.930       "config": [
00:04:08.930         {
00:04:08.930           "method": "bdev_set_options",
00:04:08.930           "params": {
00:04:08.930             "bdev_io_pool_size": 65535,
00:04:08.930             "bdev_io_cache_size": 256,
00:04:08.930             "bdev_auto_examine": true,
00:04:08.930             "iobuf_small_cache_size": 128,
00:04:08.930             "iobuf_large_cache_size": 16
00:04:08.930           }
00:04:08.930         },
00:04:08.930         {
00:04:08.930           "method": "bdev_raid_set_options",
00:04:08.930           "params": {
00:04:08.930             "process_window_size_kb": 1024,
00:04:08.930             "process_max_bandwidth_mb_sec": 0
00:04:08.930           }
00:04:08.930         },
00:04:08.930         {
00:04:08.930           "method": "bdev_iscsi_set_options",
00:04:08.930           "params": {
00:04:08.930             "timeout_sec": 30
00:04:08.930           }
00:04:08.930         },
00:04:08.930         {
00:04:08.930           "method": "bdev_nvme_set_options",
00:04:08.930           "params": {
00:04:08.930             "action_on_timeout": "none",
00:04:08.930             "timeout_us": 0,
00:04:08.930             "timeout_admin_us": 0,
00:04:08.930             "keep_alive_timeout_ms": 10000,
00:04:08.930             "arbitration_burst": 0,
00:04:08.930             "low_priority_weight": 0,
00:04:08.930             "medium_priority_weight": 0,
00:04:08.930             "high_priority_weight": 0,
00:04:08.930             "nvme_adminq_poll_period_us": 10000,
00:04:08.930             "nvme_ioq_poll_period_us": 0,
00:04:08.930             "io_queue_requests": 0,
00:04:08.930             "delay_cmd_submit": true,
00:04:08.930             "transport_retry_count": 4,
00:04:08.930             "bdev_retry_count": 3,
00:04:08.930             "transport_ack_timeout": 0,
00:04:08.930             "ctrlr_loss_timeout_sec": 0,
00:04:08.930             "reconnect_delay_sec": 0,
00:04:08.930             "fast_io_fail_timeout_sec": 0,
00:04:08.930             "disable_auto_failback": false,
00:04:08.930             "generate_uuids": false,
00:04:08.930             "transport_tos": 0,
00:04:08.930             "nvme_error_stat": false,
00:04:08.930             "rdma_srq_size": 0,
00:04:08.930             "io_path_stat": false,
00:04:08.930             "allow_accel_sequence": false,
00:04:08.930             "rdma_max_cq_size": 0,
00:04:08.930             "rdma_cm_event_timeout_ms": 0,
00:04:08.931             "dhchap_digests": [
00:04:08.931               "sha256",
00:04:08.931               "sha384",
00:04:08.931               "sha512"
00:04:08.931             ],
00:04:08.931             "dhchap_dhgroups": [
00:04:08.931               "null",
00:04:08.931               "ffdhe2048",
00:04:08.931               "ffdhe3072",
00:04:08.931               "ffdhe4096",
00:04:08.931               "ffdhe6144",
00:04:08.931               "ffdhe8192"
00:04:08.931             ],
00:04:08.931             "rdma_umr_per_io": false
00:04:08.931           }
00:04:08.931         },
00:04:08.931         {
00:04:08.931           "method": "bdev_nvme_set_hotplug",
00:04:08.931           "params": {
00:04:08.931             "period_us": 100000,
00:04:08.931             "enable": false
00:04:08.931           }
00:04:08.931         },
00:04:08.931         {
00:04:08.931           "method": "bdev_wait_for_examine"
00:04:08.931         }
00:04:08.931       ]
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "scsi",
00:04:08.931       "config": null
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "scheduler",
00:04:08.931       "config": [
00:04:08.931         {
00:04:08.931           "method": "framework_set_scheduler",
00:04:08.931           "params": {
00:04:08.931             "name": "static"
00:04:08.931           }
00:04:08.931         }
00:04:08.931       ]
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "vhost_scsi",
00:04:08.931       "config": []
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "vhost_blk",
00:04:08.931       "config": []
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "ublk",
00:04:08.931       "config": []
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "nbd",
00:04:08.931       "config": []
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "nvmf",
00:04:08.931       "config": [
00:04:08.931         {
00:04:08.931           "method": "nvmf_set_config",
00:04:08.931           "params": {
00:04:08.931             "discovery_filter": "match_any",
00:04:08.931             "admin_cmd_passthru": {
00:04:08.931               "identify_ctrlr": false
00:04:08.931             },
00:04:08.931             "dhchap_digests": [
00:04:08.931               "sha256",
00:04:08.931               "sha384",
00:04:08.931               "sha512"
00:04:08.931             ],
00:04:08.931             "dhchap_dhgroups": [
00:04:08.931               "null",
00:04:08.931               "ffdhe2048",
00:04:08.931               "ffdhe3072",
00:04:08.931               "ffdhe4096",
00:04:08.931               "ffdhe6144",
00:04:08.931               "ffdhe8192"
00:04:08.931             ]
00:04:08.931           }
00:04:08.931         },
00:04:08.931         {
00:04:08.931           "method": "nvmf_set_max_subsystems",
00:04:08.931           "params": {
00:04:08.931             "max_subsystems": 1024
00:04:08.931           }
00:04:08.931         },
00:04:08.931         {
00:04:08.931           "method": "nvmf_set_crdt",
00:04:08.931           "params": {
00:04:08.931             "crdt1": 0,
00:04:08.931             "crdt2": 0,
00:04:08.931             "crdt3": 0
00:04:08.931           }
00:04:08.931         },
00:04:08.931         {
00:04:08.931           "method": "nvmf_create_transport",
00:04:08.931           "params": {
00:04:08.931             "trtype": "TCP",
00:04:08.931             "max_queue_depth": 128,
00:04:08.931             "max_io_qpairs_per_ctrlr": 127,
00:04:08.931             "in_capsule_data_size": 4096,
00:04:08.931             "max_io_size": 131072,
00:04:08.931             "io_unit_size": 131072,
00:04:08.931             "max_aq_depth": 128,
00:04:08.931             "num_shared_buffers": 511,
00:04:08.931             "buf_cache_size": 4294967295,
00:04:08.931             "dif_insert_or_strip": false,
00:04:08.931             "zcopy": false,
00:04:08.931             "c2h_success": true,
00:04:08.931             "sock_priority": 0,
00:04:08.931             "abort_timeout_sec": 1,
00:04:08.931             "ack_timeout": 0,
00:04:08.931             "data_wr_pool_size": 0
00:04:08.931           }
00:04:08.931         }
00:04:08.931       ]
00:04:08.931     },
00:04:08.931     {
00:04:08.931       "subsystem": "iscsi",
00:04:08.931       "config": [
00:04:08.931         {
00:04:08.931           "method": "iscsi_set_options",
00:04:08.931           "params": {
00:04:08.931             "node_base": "iqn.2016-06.io.spdk",
00:04:08.931             "max_sessions": 128,
00:04:08.931             "max_connections_per_session": 2,
00:04:08.931             "max_queue_depth": 64,
00:04:08.931             "default_time2wait": 2,
00:04:08.931             "default_time2retain": 20,
00:04:08.931             "first_burst_length": 8192,
00:04:08.931             "immediate_data": true,
00:04:08.931             "allow_duplicated_isid": false,
00:04:08.931             "error_recovery_level": 0,
00:04:08.931             "nop_timeout": 60,
00:04:08.931             "nop_in_interval": 30,
00:04:08.931             "disable_chap": false,
00:04:08.931             "require_chap": false,
00:04:08.931             "mutual_chap": false,
00:04:08.931             "chap_group": 0,
00:04:08.931             "max_large_datain_per_connection": 64,
00:04:08.931             "max_r2t_per_connection": 4,
00:04:08.931             "pdu_pool_size": 36864,
00:04:08.931             "immediate_data_pool_size": 16384,
00:04:08.931             "data_out_pool_size": 2048
00:04:08.931           }
00:04:08.931         }
00:04:08.931       ]
00:04:08.931     }
00:04:08.931   ]
00:04:08.931 }
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 543689
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 543689 ']'
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 543689
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543689
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:08.931 14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543689'
killing process with pid 543689
14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 543689
14:40:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 543689
00:04:09.189 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=543830
00:04:09.189 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:09.189 14:40:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 543830
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 543830 ']'
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 543830
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543830
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:14.442 14:40:56 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543830' 00:04:14.442 killing process with pid 543830 00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 543830 00:04:14.442 14:40:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 543830 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.699 00:04:14.699 real 0m6.521s 00:04:14.699 user 0m6.138s 00:04:14.699 sys 0m0.684s 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.699 ************************************ 00:04:14.699 END TEST skip_rpc_with_json 00:04:14.699 ************************************ 00:04:14.699 14:40:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:14.699 14:40:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.699 14:40:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.699 14:40:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.699 ************************************ 00:04:14.699 START TEST skip_rpc_with_delay 00:04:14.699 ************************************ 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.699 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.956 [2024-12-11 14:40:57.491580] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:14.956 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:14.956 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.956 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.956 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.956 00:04:14.956 real 0m0.074s 00:04:14.956 user 0m0.049s 00:04:14.956 sys 0m0.025s 00:04:14.956 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.956 14:40:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:14.956 ************************************ 00:04:14.956 END TEST skip_rpc_with_delay 00:04:14.956 ************************************ 00:04:14.956 14:40:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:14.956 14:40:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:14.956 14:40:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:14.956 14:40:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.956 14:40:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.956 14:40:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.956 ************************************ 00:04:14.956 START TEST exit_on_failed_rpc_init 00:04:14.956 ************************************ 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=544538 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 544538 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 544538 ']' 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.956 14:40:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.956 [2024-12-11 14:40:57.616001] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
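NOTE: the skip_rpc_with_delay result above hinges on one flag conflict: --wait-for-rpc parks the app until an RPC releases it, which is meaningless when --no-rpc-server removes the RPC server entirely, so spdk_app_start rejects the pair (app.c: 842) before any framework comes up. As a sketch, the whole assertion is:

    # Must exit non-zero; flags exactly as in the xtrace above.
    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: conflicting flags were accepted" >&2
        exit 1
    fi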
00:04:14.956 [2024-12-11 14:40:57.616099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544538 ] 00:04:14.956 [2024-12-11 14:40:57.686485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.214 [2024-12-11 14:40:57.746659] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:15.473 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:15.473 [2024-12-11 14:40:58.073590] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:15.473 [2024-12-11 14:40:58.073695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544618 ] 00:04:15.473 [2024-12-11 14:40:58.139310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.473 [2024-12-11 14:40:58.197272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.473 [2024-12-11 14:40:58.197409] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
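NOTE: the rpc_listen error above is the intended collision: both spdk_tgt instances default to /var/tmp/spdk.sock, and exit_on_failed_rpc_init only passes if the second instance fails initialization and hands back a non-zero exit code. A sketch of the collision, assuming the default socket path:

    spdk_tgt -m 0x1 &        # first instance binds /var/tmp/spdk.sock
    first=$!
    # ...wait until it listens (the harness uses waitforlisten)...
    spdk_tgt -m 0x2          # second instance: rpc_listen fails, app stops itself
    rc=$?
    kill -9 "$first"
    (( rc != 0 ))            # what the NOT wrapper asserts in the es bookkeeping below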
00:04:15.473 [2024-12-11 14:40:58.197428] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:15.473 [2024-12-11 14:40:58.197440] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 544538 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 544538 ']' 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 544538 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 544538 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 544538' 00:04:15.731 killing process with pid 544538 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 544538 00:04:15.731 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 544538 00:04:15.989 00:04:15.989 real 0m1.173s 00:04:15.989 user 0m1.290s 00:04:15.989 sys 0m0.435s 00:04:15.989 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.989 14:40:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.989 ************************************ 00:04:15.989 END TEST exit_on_failed_rpc_init 00:04:15.989 ************************************ 00:04:15.989 14:40:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.989 00:04:15.989 real 0m13.557s 00:04:15.989 user 0m12.786s 00:04:15.989 sys 0m1.649s 00:04:15.989 14:40:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.989 14:40:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.989 ************************************ 00:04:15.989 END TEST skip_rpc 00:04:15.989 ************************************ 00:04:16.248 14:40:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.248 14:40:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.248 14:40:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.248 14:40:58 -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.248 ************************************ 00:04:16.248 START TEST rpc_client 00:04:16.248 ************************************ 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:16.248 * Looking for test storage... 00:04:16.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.248 14:40:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.248 --rc genhtml_branch_coverage=1 00:04:16.248 --rc genhtml_function_coverage=1 00:04:16.248 --rc genhtml_legend=1 00:04:16.248 --rc geninfo_all_blocks=1 00:04:16.248 --rc geninfo_unexecuted_blocks=1 00:04:16.248 00:04:16.248 ' 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.248 --rc genhtml_branch_coverage=1 00:04:16.248 --rc genhtml_function_coverage=1 00:04:16.248 --rc genhtml_legend=1 00:04:16.248 --rc geninfo_all_blocks=1 00:04:16.248 --rc geninfo_unexecuted_blocks=1 00:04:16.248 00:04:16.248 ' 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.248 --rc genhtml_branch_coverage=1 00:04:16.248 --rc genhtml_function_coverage=1 00:04:16.248 --rc genhtml_legend=1 00:04:16.248 --rc geninfo_all_blocks=1 00:04:16.248 --rc geninfo_unexecuted_blocks=1 00:04:16.248 00:04:16.248 ' 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.248 --rc genhtml_branch_coverage=1 00:04:16.248 --rc genhtml_function_coverage=1 00:04:16.248 --rc genhtml_legend=1 00:04:16.248 --rc geninfo_all_blocks=1 00:04:16.248 --rc geninfo_unexecuted_blocks=1 00:04:16.248 00:04:16.248 ' 00:04:16.248 14:40:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:16.248 OK 00:04:16.248 14:40:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:16.248 00:04:16.248 real 0m0.156s 00:04:16.248 user 0m0.099s 00:04:16.248 sys 0m0.065s 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.248 14:40:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:16.248 ************************************ 00:04:16.248 END TEST rpc_client 00:04:16.248 ************************************ 00:04:16.248 14:40:58 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
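NOTE: the lt 1.15 2 / cmp_versions dance above is just a field-wise numeric compare over version strings split on '.', '-' and ':'. A condensed sketch of the same idea, not the literal scripts/common.sh code:

    lt() {  # usage: lt A B -> true when version A sorts strictly before B
        local -a a b
        local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is pre-2.x: enable the branch/function coverage flags"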
00:04:16.248 14:40:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.248 14:40:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.248 14:40:58 -- common/autotest_common.sh@10 -- # set +x 00:04:16.248 ************************************ 00:04:16.248 START TEST json_config 00:04:16.248 ************************************ 00:04:16.248 14:40:59 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:16.505 14:40:59 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:16.505 14:40:59 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:16.505 14:40:59 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:16.505 14:40:59 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:16.505 14:40:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.505 14:40:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.505 14:40:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.505 14:40:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.505 14:40:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.505 14:40:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.505 14:40:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.506 14:40:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.506 14:40:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.506 14:40:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.506 14:40:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.506 14:40:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:16.506 14:40:59 json_config -- scripts/common.sh@345 -- # : 1 00:04:16.506 14:40:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.506 14:40:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:16.506 14:40:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:16.506 14:40:59 json_config -- scripts/common.sh@353 -- # local d=1 00:04:16.506 14:40:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.506 14:40:59 json_config -- scripts/common.sh@355 -- # echo 1 00:04:16.506 14:40:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.506 14:40:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:16.506 14:40:59 json_config -- scripts/common.sh@353 -- # local d=2 00:04:16.506 14:40:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.506 14:40:59 json_config -- scripts/common.sh@355 -- # echo 2 00:04:16.506 14:40:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.506 14:40:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.506 14:40:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.506 14:40:59 json_config -- scripts/common.sh@368 -- # return 0 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.506 --rc genhtml_branch_coverage=1 00:04:16.506 --rc genhtml_function_coverage=1 00:04:16.506 --rc genhtml_legend=1 00:04:16.506 --rc geninfo_all_blocks=1 00:04:16.506 --rc geninfo_unexecuted_blocks=1 00:04:16.506 00:04:16.506 ' 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.506 --rc genhtml_branch_coverage=1 00:04:16.506 --rc genhtml_function_coverage=1 00:04:16.506 --rc genhtml_legend=1 00:04:16.506 --rc geninfo_all_blocks=1 00:04:16.506 --rc geninfo_unexecuted_blocks=1 00:04:16.506 00:04:16.506 ' 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.506 --rc genhtml_branch_coverage=1 00:04:16.506 --rc genhtml_function_coverage=1 00:04:16.506 --rc genhtml_legend=1 00:04:16.506 --rc geninfo_all_blocks=1 00:04:16.506 --rc geninfo_unexecuted_blocks=1 00:04:16.506 00:04:16.506 ' 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:16.506 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.506 --rc genhtml_branch_coverage=1 00:04:16.506 --rc genhtml_function_coverage=1 00:04:16.506 --rc genhtml_legend=1 00:04:16.506 --rc geninfo_all_blocks=1 00:04:16.506 --rc geninfo_unexecuted_blocks=1 00:04:16.506 00:04:16.506 ' 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
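NOTE: the defaults exported from test/nvmf/common.sh above are the ones the rest of this json_config pass leans on. The relevant slice, in shell form:

    NVMF_PORT=4420                 # the port the cnode1 listener binds at 14:41:05
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1  # loopback suffices here; the physical e810
                                   # NICs are not touched by this sub-test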
00:04:16.506 14:40:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:16.506 14:40:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:16.506 14:40:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:16.506 14:40:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:16.506 14:40:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:16.506 14:40:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.506 14:40:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.506 14:40:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.506 14:40:59 json_config -- paths/export.sh@5 -- # export PATH 00:04:16.506 14:40:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@51 -- # : 0 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
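NOTE: one line of noise just below is worth decoding: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests is empty for this run, and test(1) rejects an empty operand for -eq. The check simply comes out false and the run continues. Reproduced in isolation (variable name hypothetical; any empty value behaves the same):

    unset SPDK_TEST_FOO
    [ "$SPDK_TEST_FOO" -eq 1 ]      # stderr: "[: : integer expression expected", rc=2
    [ "${SPDK_TEST_FOO:-0}" -eq 1 ] # defaulting the expansion keeps the test quiet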
00:04:16.506 14:40:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:16.506 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:16.506 14:40:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:16.506 INFO: JSON configuration test init 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.506 14:40:59 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:16.506 14:40:59 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:16.506 14:40:59 json_config -- json_config/common.sh@10 -- # shift 00:04:16.506 14:40:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:16.506 14:40:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:16.506 14:40:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:16.506 14:40:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.506 14:40:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:16.506 14:40:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=544892 00:04:16.506 14:40:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:16.506 14:40:59 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:16.506 Waiting for target to run... 00:04:16.506 14:40:59 json_config -- json_config/common.sh@25 -- # waitforlisten 544892 /var/tmp/spdk_tgt.sock 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@835 -- # '[' -z 544892 ']' 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:16.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.506 14:40:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.506 [2024-12-11 14:40:59.219108] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
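NOTE: the --wait-for-rpc flag in the invocation above changes the startup contract: the target brings up only its RPC server on /var/tmp/spdk_tgt.sock and holds off subsystem initialization until it has been configured over that socket. The handshake that follows, as a sketch:

    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # target idles, answering only startup-phase RPCs on the custom socket
    scripts/gen_nvme.sh --json-with-subsystems \
        | rpc.py -s /var/tmp/spdk_tgt.sock load_config
    # load_config replays the generated JSON method by method; by the time the
    # notify_get_types call below runs, initialization has completed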
00:04:16.506 [2024-12-11 14:40:59.219189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544892 ] 00:04:17.072 [2024-12-11 14:40:59.724669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.072 [2024-12-11 14:40:59.775731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.638 14:41:00 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:17.638 14:41:00 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:17.638 14:41:00 json_config -- json_config/common.sh@26 -- # echo '' 00:04:17.638 00:04:17.638 14:41:00 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:17.638 14:41:00 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:17.638 14:41:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.638 14:41:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.638 14:41:00 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:17.638 14:41:00 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:17.638 14:41:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.638 14:41:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.638 14:41:00 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:17.638 14:41:00 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:17.638 14:41:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:20.920 14:41:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.920 14:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:20.920 14:41:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:20.920 14:41:03 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:21.178 14:41:03 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@54 -- # sort 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:21.178 14:41:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.178 14:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:21.178 14:41:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:21.178 14:41:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:21.178 14:41:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.178 14:41:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:21.436 MallocForNvmf0 00:04:21.436 14:41:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.436 14:41:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:21.692 MallocForNvmf1 00:04:21.693 14:41:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:21.693 14:41:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:21.950 [2024-12-11 14:41:04.502019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.950 14:41:04 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:21.950 14:41:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:22.207 14:41:04 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.207 14:41:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:22.465 14:41:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.465 14:41:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:22.722 14:41:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.722 14:41:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.979 [2024-12-11 14:41:05.561400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.979 14:41:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:22.979 14:41:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.979 14:41:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.979 14:41:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:22.979 14:41:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.979 14:41:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.979 14:41:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:22.979 14:41:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:22.979 14:41:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:23.236 MallocBdevForConfigChangeCheck 00:04:23.236 14:41:05 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:23.237 14:41:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.237 14:41:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.237 14:41:05 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:23.237 14:41:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.801 14:41:06 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:23.801 INFO: shutting down applications... 
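NOTE: collected from the xtrace above, the nvmf state that was just built and captured by save_config comes from this RPC sequence; the same commands work against any running spdk_tgt socket:

    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4420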
00:04:23.801 14:41:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:23.801 14:41:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:23.801 14:41:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:23.801 14:41:06 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:25.697 Calling clear_iscsi_subsystem 00:04:25.697 Calling clear_nvmf_subsystem 00:04:25.697 Calling clear_nbd_subsystem 00:04:25.697 Calling clear_ublk_subsystem 00:04:25.697 Calling clear_vhost_blk_subsystem 00:04:25.697 Calling clear_vhost_scsi_subsystem 00:04:25.697 Calling clear_bdev_subsystem 00:04:25.697 14:41:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:25.697 14:41:07 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:25.697 14:41:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:25.697 14:41:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.697 14:41:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:25.697 14:41:07 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:25.697 14:41:08 json_config -- json_config/json_config.sh@352 -- # break 00:04:25.697 14:41:08 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:25.697 14:41:08 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:25.697 14:41:08 json_config -- json_config/common.sh@31 -- # local app=target 00:04:25.697 14:41:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:25.697 14:41:08 json_config -- json_config/common.sh@35 -- # [[ -n 544892 ]] 00:04:25.697 14:41:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 544892 00:04:25.697 14:41:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:25.697 14:41:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.697 14:41:08 json_config -- json_config/common.sh@41 -- # kill -0 544892 00:04:25.697 14:41:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:26.265 14:41:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:26.265 14:41:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.265 14:41:08 json_config -- json_config/common.sh@41 -- # kill -0 544892 00:04:26.265 14:41:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:26.265 14:41:08 json_config -- json_config/common.sh@43 -- # break 00:04:26.265 14:41:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:26.265 14:41:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:26.265 SPDK target shutdown done 00:04:26.265 14:41:08 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:26.265 INFO: relaunching applications... 
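[Note: the shutdown traced above (json_config/common.sh) is a plain signal-then-poll pattern: SIGINT the target, then check with kill -0 for up to 30 half-second intervals. A condensed sketch, with the PID inlined for illustration:

  app_pid=544892                                # spdk_tgt PID from the trace above
  kill -SIGINT "$app_pid"                       # request a clean shutdown
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # process gone: stop polling
      sleep 0.5
  done
  echo 'SPDK target shutdown done'
]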
00:04:26.265 14:41:08 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.265 14:41:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:26.265 14:41:08 json_config -- json_config/common.sh@10 -- # shift 00:04:26.265 14:41:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.265 14:41:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.265 14:41:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.265 14:41:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.265 14:41:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.265 14:41:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=546128 00:04:26.265 14:41:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.265 14:41:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:26.265 Waiting for target to run... 00:04:26.265 14:41:08 json_config -- json_config/common.sh@25 -- # waitforlisten 546128 /var/tmp/spdk_tgt.sock 00:04:26.265 14:41:08 json_config -- common/autotest_common.sh@835 -- # '[' -z 546128 ']' 00:04:26.265 14:41:08 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.265 14:41:08 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:26.265 14:41:08 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.265 14:41:08 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:26.265 14:41:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.265 [2024-12-11 14:41:08.948945] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:26.265 [2024-12-11 14:41:08.949046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546128 ] 00:04:26.832 [2024-12-11 14:41:09.319288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.832 [2024-12-11 14:41:09.362354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.115 [2024-12-11 14:41:12.413828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.115 [2024-12-11 14:41:12.446270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.115 14:41:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.115 14:41:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:30.115 14:41:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.115 00:04:30.115 14:41:12 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:30.115 14:41:12 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:30.115 INFO: Checking if target configuration is the same... 
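[Note: the check announced above works by normalizing both configurations before diffing, so JSON key ordering cannot produce false mismatches. The core of json_diff.sh, sketched (config_filter.py -method sort canonicalizes each document; temp file paths are illustrative):

  rpc='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  # Dump the live config and sort it into canonical form.
  $rpc save_config | test/json_config/config_filter.py -method sort > /tmp/live.json
  # Canonicalize the saved file the target was relaunched from.
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
  # Identical output means the target reproduced its saved configuration.
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'
]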
00:04:30.115 14:41:12 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.115 14:41:12 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:30.115 14:41:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.115 + '[' 2 -ne 2 ']' 00:04:30.115 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:30.115 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:30.115 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:30.115 +++ basename /dev/fd/62 00:04:30.115 ++ mktemp /tmp/62.XXX 00:04:30.115 + tmp_file_1=/tmp/62.vX9 00:04:30.115 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.116 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.116 + tmp_file_2=/tmp/spdk_tgt_config.json.npR 00:04:30.116 + ret=0 00:04:30.116 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.374 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.374 + diff -u /tmp/62.vX9 /tmp/spdk_tgt_config.json.npR 00:04:30.374 + echo 'INFO: JSON config files are the same' 00:04:30.374 INFO: JSON config files are the same 00:04:30.374 + rm /tmp/62.vX9 /tmp/spdk_tgt_config.json.npR 00:04:30.374 + exit 0 00:04:30.374 14:41:12 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:30.374 14:41:12 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:30.374 INFO: changing configuration and checking if this can be detected... 00:04:30.374 14:41:12 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:30.374 14:41:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:30.633 14:41:13 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.633 14:41:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:30.633 14:41:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.633 + '[' 2 -ne 2 ']' 00:04:30.633 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:30.633 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:30.633 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:30.633 +++ basename /dev/fd/62 00:04:30.633 ++ mktemp /tmp/62.XXX 00:04:30.633 + tmp_file_1=/tmp/62.BVA 00:04:30.633 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.633 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.633 + tmp_file_2=/tmp/spdk_tgt_config.json.2X9 00:04:30.633 + ret=0 00:04:30.633 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.890 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.148 + diff -u /tmp/62.BVA /tmp/spdk_tgt_config.json.2X9 00:04:31.148 + ret=1 00:04:31.148 + echo '=== Start of file: /tmp/62.BVA ===' 00:04:31.148 + cat /tmp/62.BVA 00:04:31.148 + echo '=== End of file: /tmp/62.BVA ===' 00:04:31.148 + echo '' 00:04:31.148 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2X9 ===' 00:04:31.148 + cat /tmp/spdk_tgt_config.json.2X9 00:04:31.148 + echo '=== End of file: /tmp/spdk_tgt_config.json.2X9 ===' 00:04:31.148 + echo '' 00:04:31.148 + rm /tmp/62.BVA /tmp/spdk_tgt_config.json.2X9 00:04:31.148 + exit 1 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:31.148 INFO: configuration change detected. 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@324 -- # [[ -n 546128 ]] 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 14:41:13 json_config -- json_config/json_config.sh@330 -- # killprocess 546128 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@954 -- # '[' -z 546128 ']' 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@958 -- # kill -0 546128 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@959 -- # uname 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.148 14:41:13 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 546128 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 546128' 00:04:31.148 killing process with pid 546128 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@973 -- # kill 546128 00:04:31.148 14:41:13 json_config -- common/autotest_common.sh@978 -- # wait 546128 00:04:32.623 14:41:15 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.623 14:41:15 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:32.623 14:41:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.623 14:41:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.882 14:41:15 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:32.882 14:41:15 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:32.882 INFO: Success 00:04:32.882 00:04:32.882 real 0m16.388s 00:04:32.882 user 0m18.016s 00:04:32.882 sys 0m2.580s 00:04:32.882 14:41:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.882 14:41:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.882 ************************************ 00:04:32.882 END TEST json_config 00:04:32.882 ************************************ 00:04:32.882 14:41:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.882 14:41:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.882 14:41:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.882 14:41:15 -- common/autotest_common.sh@10 -- # set +x 00:04:32.882 ************************************ 00:04:32.882 START TEST json_config_extra_key 00:04:32.882 ************************************ 00:04:32.882 14:41:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.882 14:41:15 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.882 14:41:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.882 14:41:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.882 14:41:15 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.882 14:41:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.883 14:41:15 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:32.883 14:41:15 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.883 14:41:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.883 --rc genhtml_branch_coverage=1 00:04:32.883 --rc genhtml_function_coverage=1 00:04:32.883 --rc genhtml_legend=1 00:04:32.883 --rc geninfo_all_blocks=1 00:04:32.883 --rc geninfo_unexecuted_blocks=1 00:04:32.883 00:04:32.883 ' 00:04:32.883 14:41:15 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.883 --rc genhtml_branch_coverage=1 00:04:32.883 --rc genhtml_function_coverage=1 00:04:32.883 --rc genhtml_legend=1 00:04:32.883 --rc geninfo_all_blocks=1 00:04:32.883 --rc geninfo_unexecuted_blocks=1 00:04:32.883 00:04:32.883 ' 00:04:32.883 14:41:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.883 --rc genhtml_branch_coverage=1 00:04:32.883 --rc genhtml_function_coverage=1 00:04:32.883 --rc genhtml_legend=1 00:04:32.883 --rc geninfo_all_blocks=1 00:04:32.883 --rc geninfo_unexecuted_blocks=1 00:04:32.883 00:04:32.883 ' 00:04:32.883 14:41:15 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.883 --rc genhtml_branch_coverage=1 00:04:32.883 --rc genhtml_function_coverage=1 00:04:32.883 --rc genhtml_legend=1 00:04:32.883 --rc geninfo_all_blocks=1 00:04:32.883 --rc geninfo_unexecuted_blocks=1 00:04:32.883 00:04:32.883 ' 00:04:32.883 14:41:15 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.883 14:41:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.883 14:41:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.883 14:41:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.883 14:41:15 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.883 14:41:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.883 14:41:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.883 14:41:15 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.883 INFO: launching applications... 
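[Note: the "[: : integer expression expected" message above comes from nvmf/common.sh line 33 applying a numeric test to an empty expansion ('[' '' -eq 1 ']'). It is harmless for this run, but the usual defensive fix is to default the variable before comparing; sketched with a hypothetical variable name, since the trace does not show which variable expanded empty:

  flag=''                            # illustrative; the trace only shows the empty expansion
  # [ "$flag" -eq 1 ]                # reproduces the error: '' is not an integer
  [ "${flag:-0}" -eq 1 ] || echo 'flag not set to 1'   # empty/unset is treated as 0
]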
00:04:32.883 14:41:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.883 14:41:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=547060 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.884 Waiting for target to run... 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.884 14:41:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 547060 /var/tmp/spdk_tgt.sock 00:04:32.884 14:41:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 547060 ']' 00:04:32.884 14:41:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.884 14:41:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.884 14:41:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.884 14:41:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.884 14:41:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.884 [2024-12-11 14:41:15.641531] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:32.884 [2024-12-11 14:41:15.641619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547060 ] 00:04:33.452 [2024-12-11 14:41:16.137294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.452 [2024-12-11 14:41:16.189065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.018 14:41:16 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.018 14:41:16 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.018 00:04:34.018 14:41:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:34.018 INFO: shutting down applications... 
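[Note: waitforlisten, used above to gate the test on target readiness, amounts to polling the RPC socket until it answers. An illustrative loop (not the exact common.sh implementation), using spdk_get_version as the probe and the max_retries=100 seen in the trace:

  sock=/var/tmp/spdk_tgt.sock
  for (( i = 0; i < 100; i++ )); do
      # Any cheap RPC works as a liveness probe; success means the app is listening.
      scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1 && break
      sleep 0.1
  done
]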
00:04:34.018 14:41:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 547060 ]] 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 547060 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 547060 00:04:34.018 14:41:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 547060 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.584 14:41:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.584 SPDK target shutdown done 00:04:34.584 14:41:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.584 Success 00:04:34.584 00:04:34.584 real 0m1.683s 00:04:34.584 user 0m1.547s 00:04:34.584 sys 0m0.595s 00:04:34.584 14:41:17 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.584 14:41:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.584 ************************************ 00:04:34.584 END TEST json_config_extra_key 00:04:34.584 ************************************ 00:04:34.584 14:41:17 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.584 14:41:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.584 14:41:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.584 14:41:17 -- common/autotest_common.sh@10 -- # set +x 00:04:34.584 ************************************ 00:04:34.584 START TEST alias_rpc 00:04:34.584 ************************************ 00:04:34.584 14:41:17 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.584 * Looking for test storage... 
00:04:34.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:34.584 14:41:17 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.584 14:41:17 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.584 14:41:17 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.584 14:41:17 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.584 14:41:17 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.584 14:41:17 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.584 14:41:17 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.584 14:41:17 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.584 14:41:17 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.585 14:41:17 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.585 --rc genhtml_branch_coverage=1 00:04:34.585 --rc genhtml_function_coverage=1 00:04:34.585 --rc genhtml_legend=1 00:04:34.585 --rc geninfo_all_blocks=1 00:04:34.585 --rc geninfo_unexecuted_blocks=1 00:04:34.585 00:04:34.585 ' 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.585 --rc genhtml_branch_coverage=1 00:04:34.585 --rc genhtml_function_coverage=1 00:04:34.585 --rc genhtml_legend=1 00:04:34.585 --rc geninfo_all_blocks=1 00:04:34.585 --rc geninfo_unexecuted_blocks=1 00:04:34.585 00:04:34.585 ' 00:04:34.585 14:41:17 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.585 --rc genhtml_branch_coverage=1 00:04:34.585 --rc genhtml_function_coverage=1 00:04:34.585 --rc genhtml_legend=1 00:04:34.585 --rc geninfo_all_blocks=1 00:04:34.585 --rc geninfo_unexecuted_blocks=1 00:04:34.585 00:04:34.585 ' 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.585 --rc genhtml_branch_coverage=1 00:04:34.585 --rc genhtml_function_coverage=1 00:04:34.585 --rc genhtml_legend=1 00:04:34.585 --rc geninfo_all_blocks=1 00:04:34.585 --rc geninfo_unexecuted_blocks=1 00:04:34.585 00:04:34.585 ' 00:04:34.585 14:41:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.585 14:41:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=547375 00:04:34.585 14:41:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.585 14:41:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 547375 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 547375 ']' 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.585 14:41:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.843 [2024-12-11 14:41:17.381259] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:04:34.843 [2024-12-11 14:41:17.381346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547375 ] 00:04:34.843 [2024-12-11 14:41:17.446394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.843 [2024-12-11 14:41:17.501896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.101 14:41:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.101 14:41:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.101 14:41:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:35.359 14:41:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 547375 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 547375 ']' 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 547375 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547375 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547375' 00:04:35.359 killing process with pid 547375 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 547375 00:04:35.359 14:41:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 547375 00:04:35.925 00:04:35.925 real 0m1.306s 00:04:35.925 user 0m1.423s 00:04:35.925 sys 0m0.422s 00:04:35.925 14:41:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.925 14:41:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.925 ************************************ 00:04:35.925 END TEST alias_rpc 00:04:35.925 ************************************ 00:04:35.925 14:41:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:35.925 14:41:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.925 14:41:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.925 14:41:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.925 14:41:18 -- common/autotest_common.sh@10 -- # set +x 00:04:35.925 ************************************ 00:04:35.925 START TEST spdkcli_tcp 00:04:35.925 ************************************ 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.925 * Looking for test storage... 
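[Note: killprocess, which just stopped pid 547375 above, guards against PID reuse before signalling: it confirms the process still exists and that its command name is the expected reactor rather than, say, a sudo wrapper. A condensed sketch of that logic:

  pid=547375
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
      name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for spdk_tgt
      if [ "$name" != sudo ]; then              # never blindly kill a sudo wrapper
          echo "killing process with pid $pid"
          kill "$pid" && wait "$pid"            # wait reaps it; the target is a child of the test shell
      fi
  fi
]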
00:04:35.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.925 14:41:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.925 --rc genhtml_branch_coverage=1 00:04:35.925 --rc genhtml_function_coverage=1 00:04:35.925 --rc genhtml_legend=1 00:04:35.925 --rc geninfo_all_blocks=1 00:04:35.925 --rc geninfo_unexecuted_blocks=1 00:04:35.925 00:04:35.925 ' 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.925 --rc genhtml_branch_coverage=1 00:04:35.925 --rc genhtml_function_coverage=1 00:04:35.925 --rc genhtml_legend=1 00:04:35.925 --rc geninfo_all_blocks=1 00:04:35.925 --rc 
geninfo_unexecuted_blocks=1 00:04:35.925 00:04:35.925 ' 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.925 --rc genhtml_branch_coverage=1 00:04:35.925 --rc genhtml_function_coverage=1 00:04:35.925 --rc genhtml_legend=1 00:04:35.925 --rc geninfo_all_blocks=1 00:04:35.925 --rc geninfo_unexecuted_blocks=1 00:04:35.925 00:04:35.925 ' 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.925 --rc genhtml_branch_coverage=1 00:04:35.925 --rc genhtml_function_coverage=1 00:04:35.925 --rc genhtml_legend=1 00:04:35.925 --rc geninfo_all_blocks=1 00:04:35.925 --rc geninfo_unexecuted_blocks=1 00:04:35.925 00:04:35.925 ' 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=547571 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.925 14:41:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 547571 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 547571 ']' 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.925 14:41:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.926 14:41:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.926 14:41:18 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.926 14:41:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.184 [2024-12-11 14:41:18.742605] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:04:36.184 [2024-12-11 14:41:18.742693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547571 ] 00:04:36.184 [2024-12-11 14:41:18.816319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.184 [2024-12-11 14:41:18.879585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.184 [2024-12-11 14:41:18.879589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.442 14:41:19 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.442 14:41:19 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:36.442 14:41:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=547580 00:04:36.442 14:41:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:36.442 14:41:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:36.699 [ 00:04:36.699 "bdev_malloc_delete", 00:04:36.699 "bdev_malloc_create", 00:04:36.699 "bdev_null_resize", 00:04:36.699 "bdev_null_delete", 00:04:36.699 "bdev_null_create", 00:04:36.699 "bdev_nvme_cuse_unregister", 00:04:36.699 "bdev_nvme_cuse_register", 00:04:36.699 "bdev_opal_new_user", 00:04:36.699 "bdev_opal_set_lock_state", 00:04:36.699 "bdev_opal_delete", 00:04:36.699 "bdev_opal_get_info", 00:04:36.699 "bdev_opal_create", 00:04:36.699 "bdev_nvme_opal_revert", 00:04:36.699 "bdev_nvme_opal_init", 00:04:36.699 "bdev_nvme_send_cmd", 00:04:36.699 "bdev_nvme_set_keys", 00:04:36.699 "bdev_nvme_get_path_iostat", 00:04:36.699 "bdev_nvme_get_mdns_discovery_info", 00:04:36.699 "bdev_nvme_stop_mdns_discovery", 00:04:36.699 "bdev_nvme_start_mdns_discovery", 00:04:36.699 "bdev_nvme_set_multipath_policy", 00:04:36.699 "bdev_nvme_set_preferred_path", 00:04:36.699 "bdev_nvme_get_io_paths", 00:04:36.699 "bdev_nvme_remove_error_injection", 00:04:36.699 "bdev_nvme_add_error_injection", 00:04:36.699 "bdev_nvme_get_discovery_info", 00:04:36.699 "bdev_nvme_stop_discovery", 00:04:36.699 "bdev_nvme_start_discovery", 00:04:36.699 "bdev_nvme_get_controller_health_info", 00:04:36.699 "bdev_nvme_disable_controller", 00:04:36.699 "bdev_nvme_enable_controller", 00:04:36.699 "bdev_nvme_reset_controller", 00:04:36.699 "bdev_nvme_get_transport_statistics", 00:04:36.699 "bdev_nvme_apply_firmware", 00:04:36.699 "bdev_nvme_detach_controller", 00:04:36.699 "bdev_nvme_get_controllers", 00:04:36.699 "bdev_nvme_attach_controller", 00:04:36.699 "bdev_nvme_set_hotplug", 00:04:36.699 "bdev_nvme_set_options", 00:04:36.699 "bdev_passthru_delete", 00:04:36.699 "bdev_passthru_create", 00:04:36.699 "bdev_lvol_set_parent_bdev", 00:04:36.699 "bdev_lvol_set_parent", 00:04:36.699 "bdev_lvol_check_shallow_copy", 00:04:36.699 "bdev_lvol_start_shallow_copy", 00:04:36.699 "bdev_lvol_grow_lvstore", 00:04:36.700 "bdev_lvol_get_lvols", 00:04:36.700 "bdev_lvol_get_lvstores", 00:04:36.700 "bdev_lvol_delete", 00:04:36.700 "bdev_lvol_set_read_only", 00:04:36.700 "bdev_lvol_resize", 00:04:36.700 "bdev_lvol_decouple_parent", 00:04:36.700 "bdev_lvol_inflate", 00:04:36.700 "bdev_lvol_rename", 00:04:36.700 "bdev_lvol_clone_bdev", 00:04:36.700 "bdev_lvol_clone", 00:04:36.700 "bdev_lvol_snapshot", 00:04:36.700 "bdev_lvol_create", 00:04:36.700 "bdev_lvol_delete_lvstore", 00:04:36.700 "bdev_lvol_rename_lvstore", 
00:04:36.700 "bdev_lvol_create_lvstore", 00:04:36.700 "bdev_raid_set_options", 00:04:36.700 "bdev_raid_remove_base_bdev", 00:04:36.700 "bdev_raid_add_base_bdev", 00:04:36.700 "bdev_raid_delete", 00:04:36.700 "bdev_raid_create", 00:04:36.700 "bdev_raid_get_bdevs", 00:04:36.700 "bdev_error_inject_error", 00:04:36.700 "bdev_error_delete", 00:04:36.700 "bdev_error_create", 00:04:36.700 "bdev_split_delete", 00:04:36.700 "bdev_split_create", 00:04:36.700 "bdev_delay_delete", 00:04:36.700 "bdev_delay_create", 00:04:36.700 "bdev_delay_update_latency", 00:04:36.700 "bdev_zone_block_delete", 00:04:36.700 "bdev_zone_block_create", 00:04:36.700 "blobfs_create", 00:04:36.700 "blobfs_detect", 00:04:36.700 "blobfs_set_cache_size", 00:04:36.700 "bdev_aio_delete", 00:04:36.700 "bdev_aio_rescan", 00:04:36.700 "bdev_aio_create", 00:04:36.700 "bdev_ftl_set_property", 00:04:36.700 "bdev_ftl_get_properties", 00:04:36.700 "bdev_ftl_get_stats", 00:04:36.700 "bdev_ftl_unmap", 00:04:36.700 "bdev_ftl_unload", 00:04:36.700 "bdev_ftl_delete", 00:04:36.700 "bdev_ftl_load", 00:04:36.700 "bdev_ftl_create", 00:04:36.700 "bdev_virtio_attach_controller", 00:04:36.700 "bdev_virtio_scsi_get_devices", 00:04:36.700 "bdev_virtio_detach_controller", 00:04:36.700 "bdev_virtio_blk_set_hotplug", 00:04:36.700 "bdev_iscsi_delete", 00:04:36.700 "bdev_iscsi_create", 00:04:36.700 "bdev_iscsi_set_options", 00:04:36.700 "accel_error_inject_error", 00:04:36.700 "ioat_scan_accel_module", 00:04:36.700 "dsa_scan_accel_module", 00:04:36.700 "iaa_scan_accel_module", 00:04:36.700 "vfu_virtio_create_fs_endpoint", 00:04:36.700 "vfu_virtio_create_scsi_endpoint", 00:04:36.700 "vfu_virtio_scsi_remove_target", 00:04:36.700 "vfu_virtio_scsi_add_target", 00:04:36.700 "vfu_virtio_create_blk_endpoint", 00:04:36.700 "vfu_virtio_delete_endpoint", 00:04:36.700 "keyring_file_remove_key", 00:04:36.700 "keyring_file_add_key", 00:04:36.700 "keyring_linux_set_options", 00:04:36.700 "fsdev_aio_delete", 00:04:36.700 "fsdev_aio_create", 00:04:36.700 "iscsi_get_histogram", 00:04:36.700 "iscsi_enable_histogram", 00:04:36.700 "iscsi_set_options", 00:04:36.700 "iscsi_get_auth_groups", 00:04:36.700 "iscsi_auth_group_remove_secret", 00:04:36.700 "iscsi_auth_group_add_secret", 00:04:36.700 "iscsi_delete_auth_group", 00:04:36.700 "iscsi_create_auth_group", 00:04:36.700 "iscsi_set_discovery_auth", 00:04:36.700 "iscsi_get_options", 00:04:36.700 "iscsi_target_node_request_logout", 00:04:36.700 "iscsi_target_node_set_redirect", 00:04:36.700 "iscsi_target_node_set_auth", 00:04:36.700 "iscsi_target_node_add_lun", 00:04:36.700 "iscsi_get_stats", 00:04:36.700 "iscsi_get_connections", 00:04:36.700 "iscsi_portal_group_set_auth", 00:04:36.700 "iscsi_start_portal_group", 00:04:36.700 "iscsi_delete_portal_group", 00:04:36.700 "iscsi_create_portal_group", 00:04:36.700 "iscsi_get_portal_groups", 00:04:36.700 "iscsi_delete_target_node", 00:04:36.700 "iscsi_target_node_remove_pg_ig_maps", 00:04:36.700 "iscsi_target_node_add_pg_ig_maps", 00:04:36.700 "iscsi_create_target_node", 00:04:36.700 "iscsi_get_target_nodes", 00:04:36.700 "iscsi_delete_initiator_group", 00:04:36.700 "iscsi_initiator_group_remove_initiators", 00:04:36.700 "iscsi_initiator_group_add_initiators", 00:04:36.700 "iscsi_create_initiator_group", 00:04:36.700 "iscsi_get_initiator_groups", 00:04:36.700 "nvmf_set_crdt", 00:04:36.700 "nvmf_set_config", 00:04:36.700 "nvmf_set_max_subsystems", 00:04:36.700 "nvmf_stop_mdns_prr", 00:04:36.700 "nvmf_publish_mdns_prr", 00:04:36.700 "nvmf_subsystem_get_listeners", 00:04:36.700 
"nvmf_subsystem_get_qpairs", 00:04:36.700 "nvmf_subsystem_get_controllers", 00:04:36.700 "nvmf_get_stats", 00:04:36.700 "nvmf_get_transports", 00:04:36.700 "nvmf_create_transport", 00:04:36.700 "nvmf_get_targets", 00:04:36.700 "nvmf_delete_target", 00:04:36.700 "nvmf_create_target", 00:04:36.700 "nvmf_subsystem_allow_any_host", 00:04:36.700 "nvmf_subsystem_set_keys", 00:04:36.700 "nvmf_subsystem_remove_host", 00:04:36.700 "nvmf_subsystem_add_host", 00:04:36.700 "nvmf_ns_remove_host", 00:04:36.700 "nvmf_ns_add_host", 00:04:36.700 "nvmf_subsystem_remove_ns", 00:04:36.700 "nvmf_subsystem_set_ns_ana_group", 00:04:36.700 "nvmf_subsystem_add_ns", 00:04:36.700 "nvmf_subsystem_listener_set_ana_state", 00:04:36.700 "nvmf_discovery_get_referrals", 00:04:36.700 "nvmf_discovery_remove_referral", 00:04:36.700 "nvmf_discovery_add_referral", 00:04:36.700 "nvmf_subsystem_remove_listener", 00:04:36.700 "nvmf_subsystem_add_listener", 00:04:36.700 "nvmf_delete_subsystem", 00:04:36.700 "nvmf_create_subsystem", 00:04:36.700 "nvmf_get_subsystems", 00:04:36.700 "env_dpdk_get_mem_stats", 00:04:36.700 "nbd_get_disks", 00:04:36.700 "nbd_stop_disk", 00:04:36.700 "nbd_start_disk", 00:04:36.700 "ublk_recover_disk", 00:04:36.700 "ublk_get_disks", 00:04:36.700 "ublk_stop_disk", 00:04:36.700 "ublk_start_disk", 00:04:36.700 "ublk_destroy_target", 00:04:36.700 "ublk_create_target", 00:04:36.700 "virtio_blk_create_transport", 00:04:36.700 "virtio_blk_get_transports", 00:04:36.700 "vhost_controller_set_coalescing", 00:04:36.700 "vhost_get_controllers", 00:04:36.700 "vhost_delete_controller", 00:04:36.700 "vhost_create_blk_controller", 00:04:36.700 "vhost_scsi_controller_remove_target", 00:04:36.700 "vhost_scsi_controller_add_target", 00:04:36.700 "vhost_start_scsi_controller", 00:04:36.700 "vhost_create_scsi_controller", 00:04:36.700 "thread_set_cpumask", 00:04:36.700 "scheduler_set_options", 00:04:36.700 "framework_get_governor", 00:04:36.700 "framework_get_scheduler", 00:04:36.700 "framework_set_scheduler", 00:04:36.700 "framework_get_reactors", 00:04:36.700 "thread_get_io_channels", 00:04:36.700 "thread_get_pollers", 00:04:36.700 "thread_get_stats", 00:04:36.700 "framework_monitor_context_switch", 00:04:36.700 "spdk_kill_instance", 00:04:36.700 "log_enable_timestamps", 00:04:36.700 "log_get_flags", 00:04:36.700 "log_clear_flag", 00:04:36.700 "log_set_flag", 00:04:36.700 "log_get_level", 00:04:36.700 "log_set_level", 00:04:36.700 "log_get_print_level", 00:04:36.700 "log_set_print_level", 00:04:36.700 "framework_enable_cpumask_locks", 00:04:36.700 "framework_disable_cpumask_locks", 00:04:36.700 "framework_wait_init", 00:04:36.700 "framework_start_init", 00:04:36.700 "scsi_get_devices", 00:04:36.700 "bdev_get_histogram", 00:04:36.700 "bdev_enable_histogram", 00:04:36.700 "bdev_set_qos_limit", 00:04:36.700 "bdev_set_qd_sampling_period", 00:04:36.700 "bdev_get_bdevs", 00:04:36.700 "bdev_reset_iostat", 00:04:36.700 "bdev_get_iostat", 00:04:36.700 "bdev_examine", 00:04:36.700 "bdev_wait_for_examine", 00:04:36.700 "bdev_set_options", 00:04:36.700 "accel_get_stats", 00:04:36.700 "accel_set_options", 00:04:36.700 "accel_set_driver", 00:04:36.700 "accel_crypto_key_destroy", 00:04:36.700 "accel_crypto_keys_get", 00:04:36.700 "accel_crypto_key_create", 00:04:36.700 "accel_assign_opc", 00:04:36.700 "accel_get_module_info", 00:04:36.700 "accel_get_opc_assignments", 00:04:36.700 "vmd_rescan", 00:04:36.700 "vmd_remove_device", 00:04:36.700 "vmd_enable", 00:04:36.700 "sock_get_default_impl", 00:04:36.700 "sock_set_default_impl", 
00:04:36.700 "sock_impl_set_options", 00:04:36.700 "sock_impl_get_options", 00:04:36.700 "iobuf_get_stats", 00:04:36.700 "iobuf_set_options", 00:04:36.700 "keyring_get_keys", 00:04:36.700 "vfu_tgt_set_base_path", 00:04:36.700 "framework_get_pci_devices", 00:04:36.700 "framework_get_config", 00:04:36.700 "framework_get_subsystems", 00:04:36.700 "fsdev_set_opts", 00:04:36.700 "fsdev_get_opts", 00:04:36.700 "trace_get_info", 00:04:36.700 "trace_get_tpoint_group_mask", 00:04:36.700 "trace_disable_tpoint_group", 00:04:36.700 "trace_enable_tpoint_group", 00:04:36.700 "trace_clear_tpoint_mask", 00:04:36.700 "trace_set_tpoint_mask", 00:04:36.700 "notify_get_notifications", 00:04:36.700 "notify_get_types", 00:04:36.700 "spdk_get_version", 00:04:36.700 "rpc_get_methods" 00:04:36.700 ] 00:04:36.700 14:41:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:36.700 14:41:19 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.700 14:41:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.700 14:41:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:36.700 14:41:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 547571 00:04:36.700 14:41:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 547571 ']' 00:04:36.700 14:41:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 547571 00:04:36.700 14:41:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547571 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547571' 00:04:36.958 killing process with pid 547571 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 547571 00:04:36.958 14:41:19 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 547571 00:04:37.216 00:04:37.216 real 0m1.395s 00:04:37.216 user 0m2.514s 00:04:37.216 sys 0m0.472s 00:04:37.216 14:41:19 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.216 14:41:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.216 ************************************ 00:04:37.216 END TEST spdkcli_tcp 00:04:37.216 ************************************ 00:04:37.216 14:41:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.216 14:41:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.216 14:41:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.216 14:41:19 -- common/autotest_common.sh@10 -- # set +x 00:04:37.216 ************************************ 00:04:37.216 START TEST dpdk_mem_utility 00:04:37.216 ************************************ 00:04:37.216 14:41:19 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.474 * Looking for test storage... 
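The method inventory dumped above is the spdkcli_tcp test reading rpc_get_methods back from the target it just configured. A minimal sketch of querying the same list by hand, assuming a default spdk_tgt on /var/tmp/spdk.sock (the TCP address and port below are placeholders, not values from this run):

    # list every RPC method registered on a running SPDK target
    ./scripts/rpc.py rpc_get_methods
    # same query over TCP, which is the transport this test exercises
    ./scripts/rpc.py -s 127.0.0.1 -p 5260 rpc_get_methods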
00:04:37.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.474 14:41:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.474 --rc genhtml_branch_coverage=1 00:04:37.474 --rc genhtml_function_coverage=1 00:04:37.474 --rc genhtml_legend=1 00:04:37.474 --rc geninfo_all_blocks=1 00:04:37.474 --rc geninfo_unexecuted_blocks=1 00:04:37.474 00:04:37.474 ' 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.474 --rc 
genhtml_branch_coverage=1 00:04:37.474 --rc genhtml_function_coverage=1 00:04:37.474 --rc genhtml_legend=1 00:04:37.474 --rc geninfo_all_blocks=1 00:04:37.474 --rc geninfo_unexecuted_blocks=1 00:04:37.474 00:04:37.474 ' 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.474 --rc genhtml_branch_coverage=1 00:04:37.474 --rc genhtml_function_coverage=1 00:04:37.474 --rc genhtml_legend=1 00:04:37.474 --rc geninfo_all_blocks=1 00:04:37.474 --rc geninfo_unexecuted_blocks=1 00:04:37.474 00:04:37.474 ' 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.474 --rc genhtml_branch_coverage=1 00:04:37.474 --rc genhtml_function_coverage=1 00:04:37.474 --rc genhtml_legend=1 00:04:37.474 --rc geninfo_all_blocks=1 00:04:37.474 --rc geninfo_unexecuted_blocks=1 00:04:37.474 00:04:37.474 ' 00:04:37.474 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.474 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=547784 00:04:37.474 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.474 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 547784 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 547784 ']' 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.474 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.474 [2024-12-11 14:41:20.181002] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:04:37.474 [2024-12-11 14:41:20.181096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547784 ] 00:04:37.732 [2024-12-11 14:41:20.249707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.732 [2024-12-11 14:41:20.308915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.991 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.991 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:37.991 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.991 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.991 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.991 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.991 { 00:04:37.991 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.991 } 00:04:37.991 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.991 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.991 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:37.991 1 heaps totaling size 818.000000 MiB 00:04:37.991 size: 818.000000 MiB heap id: 0 00:04:37.991 end heaps---------- 00:04:37.991 9 mempools totaling size 603.782043 MiB 00:04:37.991 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:37.991 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:37.991 size: 100.555481 MiB name: bdev_io_547784 00:04:37.991 size: 50.003479 MiB name: msgpool_547784 00:04:37.991 size: 36.509338 MiB name: fsdev_io_547784 00:04:37.991 size: 21.763794 MiB name: PDU_Pool 00:04:37.991 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:37.991 size: 4.133484 MiB name: evtpool_547784 00:04:37.991 size: 0.026123 MiB name: Session_Pool 00:04:37.991 end mempools------- 00:04:37.991 6 memzones totaling size 4.142822 MiB 00:04:37.991 size: 1.000366 MiB name: RG_ring_0_547784 00:04:37.991 size: 1.000366 MiB name: RG_ring_1_547784 00:04:37.991 size: 1.000366 MiB name: RG_ring_4_547784 00:04:37.991 size: 1.000366 MiB name: RG_ring_5_547784 00:04:37.991 size: 0.125366 MiB name: RG_ring_2_547784 00:04:37.991 size: 0.015991 MiB name: RG_ring_3_547784 00:04:37.991 end memzones------- 00:04:37.991 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:37.991 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:37.991 list of free elements. 
size: 10.852478 MiB 00:04:37.991 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:37.991 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:37.991 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:37.991 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:37.991 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:37.991 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:37.991 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:37.991 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:37.991 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:37.991 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:37.991 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:37.991 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:37.991 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:37.991 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:37.991 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:37.991 list of standard malloc elements. size: 199.218628 MiB 00:04:37.991 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:37.991 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:37.991 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:37.991 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:37.991 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:37.991 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:37.991 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:37.991 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:37.991 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:37.991 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:37.991 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:37.991 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:37.991 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:37.991 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:37.991 list of memzone associated elements. size: 607.928894 MiB 00:04:37.992 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:37.992 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:37.992 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:37.992 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:37.992 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:37.992 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_547784_0 00:04:37.992 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:37.992 associated memzone info: size: 48.002930 MiB name: MP_msgpool_547784_0 00:04:37.992 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:37.992 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_547784_0 00:04:37.992 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:37.992 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:37.992 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:37.992 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:37.992 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:37.992 associated memzone info: size: 3.000122 MiB name: MP_evtpool_547784_0 00:04:37.992 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:37.992 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_547784 00:04:37.992 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:37.992 associated memzone info: size: 1.007996 MiB name: MP_evtpool_547784 00:04:37.992 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:37.992 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:37.992 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:37.992 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:37.992 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:37.992 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:37.992 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:37.992 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:37.992 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:37.992 associated memzone info: size: 1.000366 MiB name: RG_ring_0_547784 00:04:37.992 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:37.992 associated memzone info: size: 1.000366 MiB name: RG_ring_1_547784 00:04:37.992 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:37.992 associated memzone info: size: 1.000366 MiB name: RG_ring_4_547784 00:04:37.992 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:37.992 associated memzone info: size: 1.000366 MiB name: RG_ring_5_547784 00:04:37.992 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:37.992 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_547784 00:04:37.992 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:37.992 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_547784 00:04:37.992 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:37.992 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:37.992 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:37.992 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:37.992 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:37.992 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:37.992 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:37.992 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_547784 00:04:37.992 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:37.992 associated memzone info: size: 0.125366 MiB name: RG_ring_2_547784 00:04:37.992 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:37.992 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:37.992 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:37.992 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:37.992 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:37.992 associated memzone info: size: 0.015991 MiB name: RG_ring_3_547784 00:04:37.992 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:37.992 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:37.992 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:37.992 associated memzone info: size: 0.000183 MiB name: MP_msgpool_547784 00:04:37.992 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:37.992 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_547784 00:04:37.992 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:37.992 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_547784 00:04:37.992 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:37.992 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:37.992 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:37.992 14:41:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 547784 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 547784 ']' 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 547784 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547784 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547784' 00:04:37.992 killing process with pid 547784 00:04:37.992 14:41:20 dpdk_mem_utility -- 
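The heap, mempool, and memzone summaries above come from scripts/dpdk_mem_info.py parsing the dump that the env_dpdk_get_mem_stats RPC writes out (/tmp/spdk_mem_dump.txt in this run). A minimal sketch of the same sequence against a running spdk_tgt on the default socket:

    # ask the target to dump its DPDK memory state to the dump file
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps and mempools
    ./scripts/dpdk_mem_info.py
    # expand heap id 0 into its element and memzone layout, as shown above
    ./scripts/dpdk_mem_info.py -m 0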
common/autotest_common.sh@973 -- # kill 547784 00:04:37.992 14:41:20 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 547784 00:04:38.558 00:04:38.558 real 0m1.174s 00:04:38.558 user 0m1.154s 00:04:38.558 sys 0m0.422s 00:04:38.558 14:41:21 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.558 14:41:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.558 ************************************ 00:04:38.558 END TEST dpdk_mem_utility 00:04:38.558 ************************************ 00:04:38.558 14:41:21 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:38.558 14:41:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.558 14:41:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.558 14:41:21 -- common/autotest_common.sh@10 -- # set +x 00:04:38.558 ************************************ 00:04:38.558 START TEST event 00:04:38.558 ************************************ 00:04:38.558 14:41:21 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:38.558 * Looking for test storage... 00:04:38.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:38.558 14:41:21 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:38.558 14:41:21 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:38.558 14:41:21 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.816 14:41:21 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.816 14:41:21 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.816 14:41:21 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.816 14:41:21 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.816 14:41:21 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.816 14:41:21 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.816 14:41:21 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.816 14:41:21 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.816 14:41:21 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.816 14:41:21 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.816 14:41:21 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.816 14:41:21 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.816 14:41:21 event -- scripts/common.sh@345 -- # : 1 00:04:38.816 14:41:21 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.816 14:41:21 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.816 14:41:21 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.816 14:41:21 event -- scripts/common.sh@353 -- # local d=1 00:04:38.816 14:41:21 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.816 14:41:21 event -- scripts/common.sh@355 -- # echo 1 00:04:38.816 14:41:21 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.816 14:41:21 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.816 14:41:21 event -- scripts/common.sh@353 -- # local d=2 00:04:38.816 14:41:21 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.816 14:41:21 event -- scripts/common.sh@355 -- # echo 2 00:04:38.816 14:41:21 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.816 14:41:21 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.816 14:41:21 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.816 14:41:21 event -- scripts/common.sh@368 -- # return 0 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.816 --rc genhtml_branch_coverage=1 00:04:38.816 --rc genhtml_function_coverage=1 00:04:38.816 --rc genhtml_legend=1 00:04:38.816 --rc geninfo_all_blocks=1 00:04:38.816 --rc geninfo_unexecuted_blocks=1 00:04:38.816 00:04:38.816 ' 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.816 --rc genhtml_branch_coverage=1 00:04:38.816 --rc genhtml_function_coverage=1 00:04:38.816 --rc genhtml_legend=1 00:04:38.816 --rc geninfo_all_blocks=1 00:04:38.816 --rc geninfo_unexecuted_blocks=1 00:04:38.816 00:04:38.816 ' 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.816 --rc genhtml_branch_coverage=1 00:04:38.816 --rc genhtml_function_coverage=1 00:04:38.816 --rc genhtml_legend=1 00:04:38.816 --rc geninfo_all_blocks=1 00:04:38.816 --rc geninfo_unexecuted_blocks=1 00:04:38.816 00:04:38.816 ' 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.816 --rc genhtml_branch_coverage=1 00:04:38.816 --rc genhtml_function_coverage=1 00:04:38.816 --rc genhtml_legend=1 00:04:38.816 --rc geninfo_all_blocks=1 00:04:38.816 --rc geninfo_unexecuted_blocks=1 00:04:38.816 00:04:38.816 ' 00:04:38.816 14:41:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:38.816 14:41:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.816 14:41:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:38.816 14:41:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.816 14:41:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.816 ************************************ 00:04:38.816 START TEST event_perf 00:04:38.816 ************************************ 00:04:38.816 14:41:21 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:38.816 Running I/O for 1 seconds...[2024-12-11 14:41:21.394033] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:38.816 [2024-12-11 14:41:21.394103] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid547988 ] 00:04:38.816 [2024-12-11 14:41:21.462936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.816 [2024-12-11 14:41:21.525402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.816 [2024-12-11 14:41:21.525508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.816 [2024-12-11 14:41:21.525587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.816 [2024-12-11 14:41:21.525592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.187 Running I/O for 1 seconds... 00:04:40.187 lcore 0: 231960 00:04:40.187 lcore 1: 231959 00:04:40.187 lcore 2: 231959 00:04:40.187 lcore 3: 231959 00:04:40.187 done. 00:04:40.187 00:04:40.187 real 0m1.211s 00:04:40.187 user 0m4.138s 00:04:40.187 sys 0m0.069s 00:04:40.187 14:41:22 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.187 14:41:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.187 ************************************ 00:04:40.187 END TEST event_perf 00:04:40.187 ************************************ 00:04:40.187 14:41:22 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.187 14:41:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:40.187 14:41:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.187 14:41:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.187 ************************************ 00:04:40.187 START TEST event_reactor 00:04:40.187 ************************************ 00:04:40.187 14:41:22 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:40.187 [2024-12-11 14:41:22.658828] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
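event_perf bounces events across the reactors for the requested duration and prints one counter per lcore when the timer expires; the four roughly equal "lcore N:" totals above are that tally for the 0xF core mask. The invocation, as traced:

    # event-passing throughput on 4 cores (mask 0xF) for 1 second
    ./test/event/event_perf/event_perf -m 0xF -t 1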
00:04:40.187 [2024-12-11 14:41:22.658905] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548143 ] 00:04:40.187 [2024-12-11 14:41:22.729285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.187 [2024-12-11 14:41:22.788891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.120 test_start 00:04:41.120 oneshot 00:04:41.120 tick 100 00:04:41.120 tick 100 00:04:41.120 tick 250 00:04:41.120 tick 100 00:04:41.120 tick 100 00:04:41.120 tick 100 00:04:41.120 tick 250 00:04:41.120 tick 500 00:04:41.120 tick 100 00:04:41.120 tick 100 00:04:41.120 tick 250 00:04:41.120 tick 100 00:04:41.120 tick 100 00:04:41.120 test_end 00:04:41.120 00:04:41.120 real 0m1.206s 00:04:41.120 user 0m1.133s 00:04:41.120 sys 0m0.069s 00:04:41.120 14:41:23 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.120 14:41:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:41.120 ************************************ 00:04:41.120 END TEST event_reactor 00:04:41.120 ************************************ 00:04:41.120 14:41:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.120 14:41:23 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:41.120 14:41:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.120 14:41:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.378 ************************************ 00:04:41.378 START TEST event_reactor_perf 00:04:41.378 ************************************ 00:04:41.378 14:41:23 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:41.378 [2024-12-11 14:41:23.911663] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
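The oneshot/tick trace above is the reactor test's poller log. Read as a sketch, "oneshot" marks a one-time poller firing once and each "tick N" line looks like a timed poller reporting at its configured period; the log itself does not label the units, so that reading is an interpretation. The run itself:

    # reactor/poller smoke test for 1 second
    ./test/event/reactor/reactor -t 1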
00:04:41.378 [2024-12-11 14:41:23.911726] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548369 ] 00:04:41.378 [2024-12-11 14:41:23.979777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.378 [2024-12-11 14:41:24.035761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.751 test_start 00:04:42.751 test_end 00:04:42.751 Performance: 448936 events per second 00:04:42.751 00:04:42.751 real 0m1.200s 00:04:42.751 user 0m1.122s 00:04:42.751 sys 0m0.073s 00:04:42.751 14:41:25 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.751 14:41:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.751 ************************************ 00:04:42.751 END TEST event_reactor_perf 00:04:42.751 ************************************ 00:04:42.751 14:41:25 event -- event/event.sh@49 -- # uname -s 00:04:42.751 14:41:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:42.751 14:41:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:42.751 14:41:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.751 14:41:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.751 14:41:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.751 ************************************ 00:04:42.751 START TEST event_scheduler 00:04:42.751 ************************************ 00:04:42.751 14:41:25 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:42.751 * Looking for test storage... 
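reactor_perf measures how many events a single reactor can retire per second; this run reported 448936 on core 0. The invocation, verbatim from the trace:

    # single-reactor event throughput for 1 second
    ./test/event/reactor_perf/reactor_perf -t 1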
00:04:42.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:42.751 14:41:25 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.751 14:41:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.751 14:41:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.751 14:41:25 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.751 14:41:25 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.752 14:41:25 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.752 --rc genhtml_branch_coverage=1 00:04:42.752 --rc genhtml_function_coverage=1 00:04:42.752 --rc genhtml_legend=1 00:04:42.752 --rc geninfo_all_blocks=1 00:04:42.752 --rc geninfo_unexecuted_blocks=1 00:04:42.752 00:04:42.752 ' 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.752 --rc genhtml_branch_coverage=1 00:04:42.752 --rc genhtml_function_coverage=1 00:04:42.752 --rc genhtml_legend=1 00:04:42.752 --rc geninfo_all_blocks=1 00:04:42.752 --rc geninfo_unexecuted_blocks=1 00:04:42.752 00:04:42.752 ' 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.752 --rc genhtml_branch_coverage=1 00:04:42.752 --rc genhtml_function_coverage=1 00:04:42.752 --rc genhtml_legend=1 00:04:42.752 --rc geninfo_all_blocks=1 00:04:42.752 --rc geninfo_unexecuted_blocks=1 00:04:42.752 00:04:42.752 ' 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.752 --rc genhtml_branch_coverage=1 00:04:42.752 --rc genhtml_function_coverage=1 00:04:42.752 --rc genhtml_legend=1 00:04:42.752 --rc geninfo_all_blocks=1 00:04:42.752 --rc geninfo_unexecuted_blocks=1 00:04:42.752 00:04:42.752 ' 00:04:42.752 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:42.752 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=548603 00:04:42.752 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:42.752 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.752 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 548603 
00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 548603 ']' 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.752 14:41:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.752 [2024-12-11 14:41:25.344070] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:04:42.752 [2024-12-11 14:41:25.344148] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548603 ] 00:04:42.752 [2024-12-11 14:41:25.411660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.752 [2024-12-11 14:41:25.471830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.752 [2024-12-11 14:41:25.471894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.752 [2024-12-11 14:41:25.471956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.752 [2024-12-11 14:41:25.471959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:43.010 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.010 [2024-12-11 14:41:25.576887] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:43.010 [2024-12-11 14:41:25.576928] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:43.010 [2024-12-11 14:41:25.576947] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:43.010 [2024-12-11 14:41:25.576958] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:43.010 [2024-12-11 14:41:25.576968] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.010 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.010 [2024-12-11 14:41:25.679853] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
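The sequence above is scheduler.sh starting the test app with --wait-for-rpc, switching the framework to the dynamic scheduler over RPC, and only then finishing init, which is why the dpdk_governor and scheduler_dynamic notices precede "Scheduler test application started". The same steps by hand, assuming the default RPC socket:

    # start the scheduler test app paused at the RPC stage (mask 0xF, main lcore 2)
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    # select the dynamic scheduler, then let initialization complete
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init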
00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.010 14:41:25 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.010 14:41:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.010 ************************************ 00:04:43.010 START TEST scheduler_create_thread 00:04:43.010 ************************************ 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.010 2 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.010 3 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.010 4 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.010 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 5 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 6 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 7 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 8 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.011 9 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.011 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.268 10 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.268 14:41:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 14:41:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.832 00:04:43.832 real 0m0.589s 00:04:43.832 user 0m0.011s 00:04:43.832 sys 0m0.002s 00:04:43.832 14:41:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.832 14:41:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.832 ************************************ 00:04:43.832 END TEST scheduler_create_thread 00:04:43.832 ************************************ 00:04:43.832 14:41:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:43.832 14:41:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 548603 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 548603 ']' 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 548603 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548603 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548603' 00:04:43.832 killing process with pid 548603 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 548603 00:04:43.832 14:41:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 548603 00:04:44.090 [2024-12-11 14:41:26.780136] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
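scheduler_create_thread builds the thread mix the dynamic scheduler is expected to balance: fully active and idle pinned threads on each of the four cores, an unpinned one_third_active thread, a half_active thread created idle and raised to 50% with scheduler_thread_set_active, and a "deleted" thread that is removed again. These are plugin RPCs from test/event/scheduler, so rpc.py must be able to import scheduler_plugin (a PYTHONPATH detail this log does not show); a sketch of the calls as traced:

    # pinned thread on core 0, 100% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # raise thread 11 to 50% active, then delete thread 12
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12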
00:04:44.349 00:04:44.349 real 0m1.850s 00:04:44.349 user 0m2.521s 00:04:44.349 sys 0m0.350s 00:04:44.349 14:41:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.349 14:41:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.349 ************************************ 00:04:44.349 END TEST event_scheduler 00:04:44.349 ************************************ 00:04:44.349 14:41:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:44.349 14:41:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:44.349 14:41:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.349 14:41:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.349 14:41:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.349 ************************************ 00:04:44.349 START TEST app_repeat 00:04:44.349 ************************************ 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=548797 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 548797' 00:04:44.349 Process app_repeat pid: 548797 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:44.349 spdk_app_start Round 0 00:04:44.349 14:41:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 548797 /var/tmp/spdk-nbd.sock 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 548797 ']' 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.349 14:41:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.349 [2024-12-11 14:41:27.087718] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:04:44.349 [2024-12-11 14:41:27.087784] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548797 ] 00:04:44.607 [2024-12-11 14:41:27.154414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:44.607 [2024-12-11 14:41:27.212775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.607 [2024-12-11 14:41:27.212778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.607 14:41:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.607 14:41:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:44.607 14:41:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.865 Malloc0 00:04:45.124 14:41:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.381 Malloc1 00:04:45.381 14:41:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.381 14:41:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.639 /dev/nbd0 00:04:45.639 14:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.639 14:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.639 1+0 records in 00:04:45.639 1+0 records out 00:04:45.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023206 s, 17.7 MB/s 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.639 14:41:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.639 14:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.639 14:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.639 14:41:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.896 /dev/nbd1 00:04:45.896 14:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.896 14:41:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.896 1+0 records in 00:04:45.896 1+0 records out 00:04:45.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189428 s, 21.6 MB/s 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.896 14:41:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.897 14:41:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.897 14:41:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.897 14:41:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.897 14:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.897 14:41:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.897 14:41:28 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.897 14:41:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.897 14:41:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.154 { 00:04:46.154 "nbd_device": "/dev/nbd0", 00:04:46.154 "bdev_name": "Malloc0" 00:04:46.154 }, 00:04:46.154 { 00:04:46.154 "nbd_device": "/dev/nbd1", 00:04:46.154 "bdev_name": "Malloc1" 00:04:46.154 } 00:04:46.154 ]' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.154 { 00:04:46.154 "nbd_device": "/dev/nbd0", 00:04:46.154 "bdev_name": "Malloc0" 00:04:46.154 }, 00:04:46.154 { 00:04:46.154 "nbd_device": "/dev/nbd1", 00:04:46.154 "bdev_name": "Malloc1" 00:04:46.154 } 00:04:46.154 ]' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.154 /dev/nbd1' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.154 /dev/nbd1' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.154 256+0 records in 00:04:46.154 256+0 records out 00:04:46.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511274 s, 205 MB/s 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.154 256+0 records in 00:04:46.154 256+0 records out 00:04:46.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202677 s, 51.7 MB/s 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.154 14:41:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.412 256+0 records in 00:04:46.412 256+0 records out 00:04:46.412 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230103 s, 45.6 MB/s 00:04:46.412 14:41:28 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.412 14:41:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.669 14:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.670 14:41:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.927 14:41:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.185 14:41:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.185 14:41:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.442 14:41:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.700 [2024-12-11 14:41:30.391644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.700 [2024-12-11 14:41:30.448999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.700 [2024-12-11 14:41:30.448999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.958 [2024-12-11 14:41:30.506480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.958 [2024-12-11 14:41:30.506583] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.483 14:41:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.483 14:41:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:50.483 spdk_app_start Round 1 00:04:50.483 14:41:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 548797 /var/tmp/spdk-nbd.sock 00:04:50.483 14:41:33 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 548797 ']' 00:04:50.483 14:41:33 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.483 14:41:33 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.483 14:41:33 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
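Each round repeats the same setup just traced for Round 0: create two 64 MiB malloc bdevs over RPC, export them as kernel NBD devices, and wait for each device to both appear in /proc/partitions and answer a direct-I/O read. A condensed sketch (the rpc.py path and the scratch-file location are illustrative):

RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096          # -> Malloc0 (64 MiB, 4 KiB blocks)
$RPC bdev_malloc_create 64 4096          # -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do      # same retry bound as the trace
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # Prove the device answers I/O: read one block back with O_DIRECT.
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ "$size" -ne 0 ]]                  # non-empty read == device is live
}
waitfornbd nbd0
waitfornbd nbd1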
00:04:50.483 14:41:33 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.483 14:41:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.740 14:41:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.740 14:41:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.740 14:41:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.998 Malloc0 00:04:50.998 14:41:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.256 Malloc1 00:04:51.256 14:41:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.256 14:41:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.822 /dev/nbd0 00:04:51.822 14:41:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.822 14:41:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:51.822 1+0 records in 00:04:51.822 1+0 records out 00:04:51.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264655 s, 15.5 MB/s 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.822 14:41:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.822 14:41:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.822 14:41:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.822 14:41:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.088 /dev/nbd1 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.088 1+0 records in 00:04:52.088 1+0 records out 00:04:52.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200199 s, 20.5 MB/s 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:52.088 14:41:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.088 14:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:52.380 { 00:04:52.380 "nbd_device": "/dev/nbd0", 00:04:52.380 "bdev_name": "Malloc0" 00:04:52.380 }, 00:04:52.380 { 00:04:52.380 "nbd_device": "/dev/nbd1", 00:04:52.380 "bdev_name": "Malloc1" 00:04:52.380 } 00:04:52.380 ]' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.380 { 00:04:52.380 "nbd_device": "/dev/nbd0", 00:04:52.380 "bdev_name": "Malloc0" 00:04:52.380 }, 00:04:52.380 { 00:04:52.380 "nbd_device": "/dev/nbd1", 00:04:52.380 "bdev_name": "Malloc1" 00:04:52.380 } 00:04:52.380 ]' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.380 /dev/nbd1' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.380 /dev/nbd1' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.380 256+0 records in 00:04:52.380 256+0 records out 00:04:52.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503644 s, 208 MB/s 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.380 256+0 records in 00:04:52.380 256+0 records out 00:04:52.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203165 s, 51.6 MB/s 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.380 14:41:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.380 256+0 records in 00:04:52.380 256+0 records out 00:04:52.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021965 s, 47.7 MB/s 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.380 14:41:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.668 14:41:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.926 14:41:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.184 14:41:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:53.184 14:41:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:53.750 14:41:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.750 [2024-12-11 14:41:36.429511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.750 [2024-12-11 14:41:36.486042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.750 [2024-12-11 14:41:36.486045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.007 [2024-12-11 14:41:36.545711] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.007 [2024-12-11 14:41:36.545787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.534 14:41:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.534 14:41:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:56.534 spdk_app_start Round 2 00:04:56.534 14:41:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 548797 /var/tmp/spdk-nbd.sock 00:04:56.534 14:41:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 548797 ']' 00:04:56.534 14:41:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.534 14:41:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.534 14:41:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
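The data path checked in every round is the nbd_dd_data_verify write/verify pair: fill a scratch file with 1 MiB of random bytes, write it through each NBD device with O_DIRECT, then read it back via cmp, which exits non-zero if any byte differs. A minimal sketch (scratch path illustrative):

tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256     # 1 MiB of noise
for i in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
done
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"        # fails loudly on the first mismatch
done
rm "$tmp_file"

Because the malloc bdevs are 64 MiB and the test only touches the first 1 MiB, the cmp read covers exactly the region just written.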
00:04:56.534 14:41:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.534 14:41:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.792 14:41:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.792 14:41:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.792 14:41:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.050 Malloc0 00:04:57.050 14:41:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.308 Malloc1 00:04:57.308 14:41:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.308 14:41:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.873 /dev/nbd0 00:04:57.873 14:41:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.873 14:41:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:57.873 1+0 records in 00:04:57.873 1+0 records out 00:04:57.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182966 s, 22.4 MB/s 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.873 14:41:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.873 14:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.873 14:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.873 14:41:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.131 /dev/nbd1 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.131 1+0 records in 00:04:58.131 1+0 records out 00:04:58.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218522 s, 18.7 MB/s 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.131 14:41:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.131 14:41:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.389 14:41:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:58.389 { 00:04:58.389 "nbd_device": "/dev/nbd0", 00:04:58.389 "bdev_name": "Malloc0" 00:04:58.389 }, 00:04:58.389 { 00:04:58.389 "nbd_device": "/dev/nbd1", 00:04:58.389 "bdev_name": "Malloc1" 00:04:58.389 } 00:04:58.389 ]' 00:04:58.389 14:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.389 { 00:04:58.389 "nbd_device": "/dev/nbd0", 00:04:58.389 "bdev_name": "Malloc0" 00:04:58.389 }, 00:04:58.389 { 00:04:58.389 "nbd_device": "/dev/nbd1", 00:04:58.389 "bdev_name": "Malloc1" 00:04:58.389 } 00:04:58.389 ]' 00:04:58.389 14:41:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.389 /dev/nbd1' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.389 /dev/nbd1' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.389 256+0 records in 00:04:58.389 256+0 records out 00:04:58.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501529 s, 209 MB/s 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.389 256+0 records in 00:04:58.389 256+0 records out 00:04:58.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201916 s, 51.9 MB/s 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.389 256+0 records in 00:04:58.389 256+0 records out 00:04:58.389 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220256 s, 47.6 MB/s 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.389 14:41:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.647 14:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.647 14:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.647 14:41:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.647 14:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.647 14:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.648 14:41:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.648 14:41:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.648 14:41:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.648 14:41:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.648 14:41:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.905 14:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.471 14:41:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.471 14:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.471 14:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.471 14:41:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.471 14:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.471 14:41:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.471 14:41:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.471 14:41:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.471 14:41:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.471 14:41:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.471 14:41:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.471 14:41:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.471 14:41:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.729 14:41:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.988 [2024-12-11 14:41:42.518955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.988 [2024-12-11 14:41:42.573228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.988 [2024-12-11 14:41:42.573228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.988 [2024-12-11 14:41:42.631939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.988 [2024-12-11 14:41:42.632013] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.271 14:41:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 548797 /var/tmp/spdk-nbd.sock 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 548797 ']' 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
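Teardown at the end of each round mirrors the setup in reverse: detach both NBD devices, wait for each to disappear from /proc/partitions, confirm nbd_get_disks now returns an empty array, and finally ask the app to exit via spdk_kill_instance. A sketch of that sequence (retry bound and rpc.py path as above):

RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

for dev in /dev/nbd0 /dev/nbd1; do
    $RPC nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do      # waitfornbd_exit
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done

# grep -c prints 0 (and exits 1) on empty input, hence the || true.
count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[[ "$count" -eq 0 ]]

$RPC spdk_kill_instance SIGTERM          # graceful shutdown, round ends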
00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:03.271 14:41:45 event.app_repeat -- event/event.sh@39 -- # killprocess 548797 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 548797 ']' 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 548797 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 548797 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 548797' 00:05:03.271 killing process with pid 548797 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 548797 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 548797 00:05:03.271 spdk_app_start is called in Round 0. 00:05:03.271 Shutdown signal received, stop current app iteration 00:05:03.271 Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 reinitialization... 00:05:03.271 spdk_app_start is called in Round 1. 00:05:03.271 Shutdown signal received, stop current app iteration 00:05:03.271 Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 reinitialization... 00:05:03.271 spdk_app_start is called in Round 2. 00:05:03.271 Shutdown signal received, stop current app iteration 00:05:03.271 Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 reinitialization... 00:05:03.271 spdk_app_start is called in Round 3. 
00:05:03.271 Shutdown signal received, stop current app iteration 00:05:03.271 14:41:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:03.271 14:41:45 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:03.271 00:05:03.271 real 0m18.752s 00:05:03.271 user 0m41.523s 00:05:03.271 sys 0m3.252s 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.271 14:41:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.271 ************************************ 00:05:03.271 END TEST app_repeat 00:05:03.271 ************************************ 00:05:03.271 14:41:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:03.271 14:41:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.271 14:41:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.271 14:41:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.271 14:41:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.271 ************************************ 00:05:03.271 START TEST cpu_locks 00:05:03.271 ************************************ 00:05:03.271 14:41:45 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.271 * Looking for test storage... 00:05:03.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:03.271 14:41:45 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.271 14:41:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.271 14:41:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.271 14:41:46 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.271 --rc genhtml_branch_coverage=1 00:05:03.271 --rc genhtml_function_coverage=1 00:05:03.271 --rc genhtml_legend=1 00:05:03.271 --rc geninfo_all_blocks=1 00:05:03.271 --rc geninfo_unexecuted_blocks=1 00:05:03.271 00:05:03.271 ' 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.271 --rc genhtml_branch_coverage=1 00:05:03.271 --rc genhtml_function_coverage=1 00:05:03.271 --rc genhtml_legend=1 00:05:03.271 --rc geninfo_all_blocks=1 00:05:03.271 --rc geninfo_unexecuted_blocks=1 00:05:03.271 00:05:03.271 ' 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.271 --rc genhtml_branch_coverage=1 00:05:03.271 --rc genhtml_function_coverage=1 00:05:03.271 --rc genhtml_legend=1 00:05:03.271 --rc geninfo_all_blocks=1 00:05:03.271 --rc geninfo_unexecuted_blocks=1 00:05:03.271 00:05:03.271 ' 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.271 --rc genhtml_branch_coverage=1 00:05:03.271 --rc genhtml_function_coverage=1 00:05:03.271 --rc genhtml_legend=1 00:05:03.271 --rc geninfo_all_blocks=1 00:05:03.271 --rc geninfo_unexecuted_blocks=1 00:05:03.271 00:05:03.271 ' 00:05:03.271 14:41:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:03.271 14:41:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:03.271 14:41:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:03.271 14:41:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.271 14:41:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 ************************************ 
00:05:03.530 START TEST default_locks 00:05:03.530 ************************************ 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=551291 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 551291 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 551291 ']' 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.530 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.530 [2024-12-11 14:41:46.102024] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:03.530 [2024-12-11 14:41:46.102106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551291 ] 00:05:03.530 [2024-12-11 14:41:46.167275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.530 [2024-12-11 14:41:46.222283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.788 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.788 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:03.788 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 551291 00:05:03.788 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 551291 00:05:03.788 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.046 lslocks: write error 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 551291 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 551291 ']' 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 551291 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551291 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551291' 
00:05:04.046 killing process with pid 551291 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 551291 00:05:04.046 14:41:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 551291 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 551291 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 551291 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 551291 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 551291 ']' 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
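[editor's note] The "lslocks: write error" line above is harmless: grep -q exits on its first match, the pipe closes, and lslocks reports the resulting EPIPE. The check itself, as traced, reduces to a pipe over the target's pid:

  # locks_exist as traced: a claimed core shows up as a file lock on a
  # /var/tmp/spdk_cpu_lock_* file held by the target process
  locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 551291 && echo "core locks held"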
00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (551291) - No such process 00:05:04.612 ERROR: process (pid: 551291) is no longer running 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:04.612 00:05:04.612 real 0m1.185s 00:05:04.612 user 0m1.171s 00:05:04.612 sys 0m0.503s 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.612 14:41:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.612 ************************************ 00:05:04.612 END TEST default_locks 00:05:04.612 ************************************ 00:05:04.612 14:41:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:04.612 14:41:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.612 14:41:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.612 14:41:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.612 ************************************ 00:05:04.612 START TEST default_locks_via_rpc 00:05:04.612 ************************************ 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=551453 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 551453 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 551453 ']' 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
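[editor's note] The failure path traced above is the framework's NOT wrapper: default_locks kills the target, then requires waitforlisten on the dead pid to fail ("No such process", return 1, es=1). A sketch of the inversion logic; the es-over-128 handling is only noted, not reproduced, since the trace shows the check but not its full branch:

  # sketch of the NOT semantics seen above (es=1, (( es > 128 )), (( !es == 0 )))
  NOT() {
    local es=0
    "$@" || es=$?
    # the real helper also inspects (( es > 128 )) to tell death-by-signal
    # apart from an ordinary non-zero exit; elided in this sketch
    (( !es == 0 ))   # succeed only when the wrapped command failed
  }
  NOT waitforlisten 551291   # passes, because pid 551291 is gone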
00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.612 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.612 [2024-12-11 14:41:47.342691] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:04.612 [2024-12-11 14:41:47.342776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551453 ] 00:05:04.871 [2024-12-11 14:41:47.411089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.871 [2024-12-11 14:41:47.469942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 551453 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 551453 00:05:05.129 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 551453 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 551453 ']' 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 551453 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551453 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.387 14:41:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551453' 00:05:05.387 killing process with pid 551453 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 551453 00:05:05.387 14:41:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 551453 00:05:05.646 00:05:05.646 real 0m1.122s 00:05:05.646 user 0m1.088s 00:05:05.646 sys 0m0.495s 00:05:05.646 14:41:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.646 14:41:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.646 ************************************ 00:05:05.646 END TEST default_locks_via_rpc 00:05:05.646 ************************************ 00:05:05.904 14:41:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:05.904 14:41:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.904 14:41:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.904 14:41:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.904 ************************************ 00:05:05.904 START TEST non_locking_app_on_locked_coremask 00:05:05.904 ************************************ 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=551624 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 551624 /var/tmp/spdk.sock 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 551624 ']' 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.904 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.904 [2024-12-11 14:41:48.516065] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
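[editor's note] default_locks_via_rpc, which just finished, toggles the same lock state over the RPC socket instead of through process lifetime. rpc_cmd in the trace is a test wrapper; assuming a standard SPDK checkout, the equivalent calls with plain scripts/rpc.py would look like this (method names exactly as traced, socket path is the spdk_tgt default):

  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  lslocks -p "$pid" | grep spdk_cpu_lock || echo "no core locks held"
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks re-acquired"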
00:05:05.904 [2024-12-11 14:41:48.516163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551624 ] 00:05:05.904 [2024-12-11 14:41:48.587713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.904 [2024-12-11 14:41:48.647057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=551746 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 551746 /var/tmp/spdk2.sock 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 551746 ']' 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:06.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.163 14:41:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.420 [2024-12-11 14:41:48.965639] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:06.420 [2024-12-11 14:41:48.965732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid551746 ] 00:05:06.420 [2024-12-11 14:41:49.063088] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
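[editor's note] The "CPU core locks deactivated" notice above is the heart of non_locking_app_on_locked_coremask: with --disable-cpumask-locks, a second target shares core 0 without contention. Reduced to the two launches, with flags exactly as traced and paths relative to an SPDK checkout:

  build/bin/spdk_tgt -m 0x1 &                    # claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                        # same core, but no lock is taken or checked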
00:05:06.421 [2024-12-11 14:41:49.063113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.421 [2024-12-11 14:41:49.175316] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.362 14:41:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.362 14:41:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.362 14:41:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 551624 00:05:07.362 14:41:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 551624 00:05:07.362 14:41:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.927 lslocks: write error 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 551624 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 551624 ']' 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 551624 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551624 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551624' 00:05:07.927 killing process with pid 551624 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 551624 00:05:07.927 14:41:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 551624 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 551746 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 551746 ']' 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 551746 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551746 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551746' 00:05:08.861 killing 
process with pid 551746 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 551746 00:05:08.861 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 551746 00:05:09.119 00:05:09.119 real 0m3.310s 00:05:09.119 user 0m3.565s 00:05:09.119 sys 0m1.026s 00:05:09.119 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.119 14:41:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.119 ************************************ 00:05:09.119 END TEST non_locking_app_on_locked_coremask 00:05:09.119 ************************************ 00:05:09.119 14:41:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:09.119 14:41:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.119 14:41:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.119 14:41:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.119 ************************************ 00:05:09.119 START TEST locking_app_on_unlocked_coremask 00:05:09.119 ************************************ 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=552072 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 552072 /var/tmp/spdk.sock 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 552072 ']' 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.119 14:41:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.119 [2024-12-11 14:41:51.877320] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:09.119 [2024-12-11 14:41:51.877425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552072 ] 00:05:09.377 [2024-12-11 14:41:51.944055] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:09.377 [2024-12-11 14:41:51.944083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.377 [2024-12-11 14:41:51.998276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=552178 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 552178 /var/tmp/spdk2.sock 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 552178 ']' 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.636 14:41:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.636 [2024-12-11 14:41:52.311370] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
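[editor's note] Every "Waiting for process to start up..." line in this log is the waitforlisten helper polling the RPC socket. An approximation, assuming rpc_get_methods as the readiness probe; the real helper in autotest_common.sh is more thorough, though max_retries=100 matches the traces:

  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( max_retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died before it started listening
      [ -S "$rpc_addr" ] &&
        ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1
  }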
00:05:09.636 [2024-12-11 14:41:52.311453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552178 ] 00:05:09.894 [2024-12-11 14:41:52.414504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.894 [2024-12-11 14:41:52.525453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.827 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.827 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.827 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 552178 00:05:10.827 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 552178 00:05:10.827 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.085 lslocks: write error 00:05:11.085 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 552072 00:05:11.085 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 552072 ']' 00:05:11.085 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 552072 00:05:11.085 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.085 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.085 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552072 00:05:11.342 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.342 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.342 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552072' 00:05:11.342 killing process with pid 552072 00:05:11.342 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 552072 00:05:11.342 14:41:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 552072 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 552178 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 552178 ']' 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 552178 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552178 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.274 14:41:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552178' 00:05:12.274 killing process with pid 552178 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 552178 00:05:12.274 14:41:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 552178 00:05:12.532 00:05:12.532 real 0m3.344s 00:05:12.532 user 0m3.542s 00:05:12.532 sys 0m1.073s 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.532 ************************************ 00:05:12.532 END TEST locking_app_on_unlocked_coremask 00:05:12.532 ************************************ 00:05:12.532 14:41:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:12.532 14:41:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.532 14:41:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.532 14:41:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.532 ************************************ 00:05:12.532 START TEST locking_app_on_locked_coremask 00:05:12.532 ************************************ 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=552543 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 552543 /var/tmp/spdk.sock 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 552543 ']' 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.532 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.532 [2024-12-11 14:41:55.270389] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:05:12.532 [2024-12-11 14:41:55.270465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552543 ] 00:05:12.789 [2024-12-11 14:41:55.336997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.789 [2024-12-11 14:41:55.397137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.047 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.047 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.047 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=552612 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 552612 /var/tmp/spdk2.sock 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 552612 /var/tmp/spdk2.sock 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 552612 /var/tmp/spdk2.sock 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 552612 ']' 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.048 14:41:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.048 [2024-12-11 14:41:55.718401] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
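[editor's note] What the second target is about to trip over is app.c:claim_cpu_cores(): one lock file per core under /var/tmp/spdk_cpu_lock_<core>, held for the life of the owning process. A stand-in using flock(1) to reproduce the one-owner-per-file behaviour; SPDK takes its locks from C, so this only mimics the effect between two shell users and does not interoperate with a running target's own locks:

  (
    exec 9>/var/tmp/spdk_cpu_lock_000          # same file name the first spdk_tgt holds
    flock -n 9 || { echo "Cannot create lock on core 0, probably another process has claimed it" >&2; exit 1; }
    sleep 60                                   # hold core 0's lock, as the running target does
  ) &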
00:05:13.048 [2024-12-11 14:41:55.718502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552612 ] 00:05:13.305 [2024-12-11 14:41:55.823004] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 552543 has claimed it. 00:05:13.305 [2024-12-11 14:41:55.823068] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (552612) - No such process 00:05:13.904 ERROR: process (pid: 552612) is no longer running 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 552543 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 552543 00:05:13.904 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.174 lslocks: write error 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 552543 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 552543 ']' 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 552543 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552543 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552543' 00:05:14.174 killing process with pid 552543 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 552543 00:05:14.174 14:41:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 552543 00:05:14.740 00:05:14.740 real 0m2.045s 00:05:14.740 user 0m2.241s 00:05:14.740 sys 0m0.666s 00:05:14.740 14:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.740 
14:41:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 ************************************ 00:05:14.740 END TEST locking_app_on_locked_coremask 00:05:14.740 ************************************ 00:05:14.740 14:41:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:14.740 14:41:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.740 14:41:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.740 14:41:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 ************************************ 00:05:14.740 START TEST locking_overlapped_coremask 00:05:14.740 ************************************ 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=552791 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 552791 /var/tmp/spdk.sock 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 552791 ']' 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.740 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.740 [2024-12-11 14:41:57.368338] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:05:14.740 [2024-12-11 14:41:57.368438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552791 ] 00:05:14.740 [2024-12-11 14:41:57.436681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.740 [2024-12-11 14:41:57.498008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.740 [2024-12-11 14:41:57.498076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.740 [2024-12-11 14:41:57.498073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=552922 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 552922 /var/tmp/spdk2.sock 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 552922 /var/tmp/spdk2.sock 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 552922 /var/tmp/spdk2.sock 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 552922 ']' 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.307 14:41:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.307 [2024-12-11 14:41:57.834504] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
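[editor's note] locking_overlapped_coremask pits -m 0x7 (cores 0-2) against -m 0x1c (cores 2-4); the two masks intersect in exactly one bit, which is why the claim failure below names core 2:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 -> core 2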
00:05:15.307 [2024-12-11 14:41:57.834610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552922 ] 00:05:15.307 [2024-12-11 14:41:57.937974] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 552791 has claimed it. 00:05:15.307 [2024-12-11 14:41:57.938045] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:15.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (552922) - No such process 00:05:15.873 ERROR: process (pid: 552922) is no longer running 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 552791 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 552791 ']' 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 552791 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 552791 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 552791' 00:05:15.873 killing process with pid 552791 00:05:15.873 14:41:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 552791 00:05:15.873 14:41:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 552791 00:05:16.439 00:05:16.439 real 0m1.689s 00:05:16.439 user 0m4.697s 00:05:16.439 sys 0m0.467s 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.439 ************************************ 00:05:16.439 END TEST locking_overlapped_coremask 00:05:16.439 ************************************ 00:05:16.439 14:41:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:16.439 14:41:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.439 14:41:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.439 14:41:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.439 ************************************ 00:05:16.439 START TEST locking_overlapped_coremask_via_rpc 00:05:16.439 ************************************ 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=553084 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 553084 /var/tmp/spdk.sock 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 553084 ']' 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.439 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.439 [2024-12-11 14:41:59.111146] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:16.439 [2024-12-11 14:41:59.111238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553084 ] 00:05:16.439 [2024-12-11 14:41:59.175668] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
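[editor's note] check_remaining_locks, traced at the end of the previous test, verifies that exactly the three expected lock files survive. Its comparison, lifted almost verbatim from the trace:

  check_remaining_locks() {
    local locks locks_expected
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for the -m 0x7 target
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }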
00:05:16.439 [2024-12-11 14:41:59.175700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.697 [2024-12-11 14:41:59.233412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.697 [2024-12-11 14:41:59.233523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.697 [2024-12-11 14:41:59.233526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=553094 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 553094 /var/tmp/spdk2.sock 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 553094 ']' 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.955 14:41:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.956 [2024-12-11 14:41:59.572205] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:16.956 [2024-12-11 14:41:59.572295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553094 ] 00:05:16.956 [2024-12-11 14:41:59.684638] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:16.956 [2024-12-11 14:41:59.684680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.213 [2024-12-11 14:41:59.806041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.213 [2024-12-11 14:41:59.809604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:17.213 [2024-12-11 14:41:59.809617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.146 [2024-12-11 14:42:00.604666] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 553084 has claimed it. 
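(The JSON-RPC exchange below is the failure path this test exercises: both targets were started with --disable-cpumask-locks, and the first target, pid 553084 with core mask 0x7, has already re-enabled its locks and claimed cores 0-2, so asking the second target, core mask 0x1c overlapping on core 2, to enable its locks must fail. A minimal sketch of that request, reusing the rpc.py helper and the /var/tmp/spdk2.sock socket shown in the trace:

    # Ask the overlapping second target to claim lock files for its cores;
    # core 2 is already held by pid 553084, so this returns -32603
    # ("Failed to claim CPU core: 2"), which is what the NOT wrapper expects.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
)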
00:05:18.146 request: 00:05:18.146 { 00:05:18.146 "method": "framework_enable_cpumask_locks", 00:05:18.146 "req_id": 1 00:05:18.146 } 00:05:18.146 Got JSON-RPC error response 00:05:18.146 response: 00:05:18.146 { 00:05:18.146 "code": -32603, 00:05:18.146 "message": "Failed to claim CPU core: 2" 00:05:18.146 } 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 553084 /var/tmp/spdk.sock 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 553084 ']' 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 553094 /var/tmp/spdk2.sock 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 553094 ']' 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.146 14:42:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.712 00:05:18.712 real 0m2.132s 00:05:18.712 user 0m1.196s 00:05:18.712 sys 0m0.181s 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.712 14:42:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.712 ************************************ 00:05:18.712 END TEST locking_overlapped_coremask_via_rpc 00:05:18.712 ************************************ 00:05:18.712 14:42:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.712 14:42:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 553084 ]] 00:05:18.712 14:42:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 553084 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 553084 ']' 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 553084 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 553084 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 553084' 00:05:18.712 killing process with pid 553084 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 553084 00:05:18.712 14:42:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 553084 00:05:18.970 14:42:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 553094 ]] 00:05:18.970 14:42:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 553094 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 553094 ']' 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 553094 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 553094 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 553094' 00:05:18.970 killing process with pid 553094 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 553094 00:05:18.970 14:42:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 553094 00:05:19.534 14:42:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.534 14:42:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:19.534 14:42:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 553084 ]] 00:05:19.534 14:42:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 553084 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 553084 ']' 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 553084 00:05:19.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (553084) - No such process 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 553084 is not found' 00:05:19.535 Process with pid 553084 is not found 00:05:19.535 14:42:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 553094 ]] 00:05:19.535 14:42:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 553094 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 553094 ']' 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 553094 00:05:19.535 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (553094) - No such process 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 553094 is not found' 00:05:19.535 Process with pid 553094 is not found 00:05:19.535 14:42:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.535 00:05:19.535 real 0m16.248s 00:05:19.535 user 0m29.430s 00:05:19.535 sys 0m5.370s 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.535 14:42:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.535 ************************************ 00:05:19.535 END TEST cpu_locks 00:05:19.535 ************************************ 00:05:19.535 00:05:19.535 real 0m40.939s 00:05:19.535 user 1m20.080s 00:05:19.535 sys 0m9.469s 00:05:19.535 14:42:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.535 14:42:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.535 ************************************ 00:05:19.535 END TEST event 00:05:19.535 ************************************ 00:05:19.535 14:42:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.535 14:42:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.535 14:42:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.535 14:42:02 -- common/autotest_common.sh@10 -- # set +x 00:05:19.535 ************************************ 00:05:19.535 START TEST thread 00:05:19.535 ************************************ 00:05:19.535 14:42:02 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.535 * Looking for test storage... 00:05:19.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:19.535 14:42:02 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.535 14:42:02 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.535 14:42:02 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.793 14:42:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.793 14:42:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.793 14:42:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.793 14:42:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.793 14:42:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.793 14:42:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.793 14:42:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.793 14:42:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.793 14:42:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.793 14:42:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.793 14:42:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.793 14:42:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:19.793 14:42:02 thread -- scripts/common.sh@345 -- # : 1 00:05:19.793 14:42:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.793 14:42:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.793 14:42:02 thread -- scripts/common.sh@365 -- # decimal 1 00:05:19.793 14:42:02 thread -- scripts/common.sh@353 -- # local d=1 00:05:19.793 14:42:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.793 14:42:02 thread -- scripts/common.sh@355 -- # echo 1 00:05:19.793 14:42:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.793 14:42:02 thread -- scripts/common.sh@366 -- # decimal 2 00:05:19.793 14:42:02 thread -- scripts/common.sh@353 -- # local d=2 00:05:19.793 14:42:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.793 14:42:02 thread -- scripts/common.sh@355 -- # echo 2 00:05:19.793 14:42:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.793 14:42:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.793 14:42:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.793 14:42:02 thread -- scripts/common.sh@368 -- # return 0 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.793 --rc genhtml_branch_coverage=1 00:05:19.793 --rc genhtml_function_coverage=1 00:05:19.793 --rc genhtml_legend=1 00:05:19.793 --rc geninfo_all_blocks=1 00:05:19.793 --rc geninfo_unexecuted_blocks=1 00:05:19.793 00:05:19.793 ' 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.793 --rc genhtml_branch_coverage=1 00:05:19.793 --rc genhtml_function_coverage=1 00:05:19.793 --rc genhtml_legend=1 00:05:19.793 --rc geninfo_all_blocks=1 00:05:19.793 --rc geninfo_unexecuted_blocks=1 00:05:19.793 00:05:19.793 ' 00:05:19.793 14:42:02 thread 
-- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.793 --rc genhtml_branch_coverage=1 00:05:19.793 --rc genhtml_function_coverage=1 00:05:19.793 --rc genhtml_legend=1 00:05:19.793 --rc geninfo_all_blocks=1 00:05:19.793 --rc geninfo_unexecuted_blocks=1 00:05:19.793 00:05:19.793 ' 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.793 --rc genhtml_branch_coverage=1 00:05:19.793 --rc genhtml_function_coverage=1 00:05:19.793 --rc genhtml_legend=1 00:05:19.793 --rc geninfo_all_blocks=1 00:05:19.793 --rc geninfo_unexecuted_blocks=1 00:05:19.793 00:05:19.793 ' 00:05:19.793 14:42:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.793 14:42:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.793 ************************************ 00:05:19.793 START TEST thread_poller_perf 00:05:19.793 ************************************ 00:05:19.793 14:42:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.793 [2024-12-11 14:42:02.377503] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:19.793 [2024-12-11 14:42:02.377613] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553705 ] 00:05:19.793 [2024-12-11 14:42:02.446326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.793 [2024-12-11 14:42:02.504991] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.793 Running 1000 pollers for 1 seconds with 1 microseconds period. 
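(For reading the result block that follows: the reported poller_cost is consistent with busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A back-of-envelope check in shell arithmetic, derived from the printed figures and not part of the test output:

    busy=2710022169 runs=364000 tsc_hz=2700000000
    echo $(( busy / runs ))                        # 7445 cycles per poll
    echo $(( busy * 1000000000 / tsc_hz / runs ))  # 2757 ns at 2.7 GHz

The same relation holds for the 0-microsecond-period run further down: 2702004066 / 4431000 ≈ 609 cycles ≈ 225 ns.)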
00:05:21.164 [2024-12-11T13:42:03.937Z] ====================================== 00:05:21.164 [2024-12-11T13:42:03.937Z] busy:2710022169 (cyc) 00:05:21.164 [2024-12-11T13:42:03.937Z] total_run_count: 364000 00:05:21.164 [2024-12-11T13:42:03.937Z] tsc_hz: 2700000000 (cyc) 00:05:21.164 [2024-12-11T13:42:03.937Z] ====================================== 00:05:21.164 [2024-12-11T13:42:03.937Z] poller_cost: 7445 (cyc), 2757 (nsec) 00:05:21.164 00:05:21.164 real 0m1.215s 00:05:21.164 user 0m1.140s 00:05:21.164 sys 0m0.069s 00:05:21.164 14:42:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.164 14:42:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.164 ************************************ 00:05:21.164 END TEST thread_poller_perf 00:05:21.164 ************************************ 00:05:21.164 14:42:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:21.164 14:42:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:21.165 14:42:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.165 14:42:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.165 ************************************ 00:05:21.165 START TEST thread_poller_perf 00:05:21.165 ************************************ 00:05:21.165 14:42:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:21.165 [2024-12-11 14:42:03.644371] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:21.165 [2024-12-11 14:42:03.644440] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid553859 ] 00:05:21.165 [2024-12-11 14:42:03.711232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.165 [2024-12-11 14:42:03.764273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.165 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:22.097 [2024-12-11T13:42:04.870Z] ====================================== 00:05:22.097 [2024-12-11T13:42:04.870Z] busy:2702004066 (cyc) 00:05:22.097 [2024-12-11T13:42:04.870Z] total_run_count: 4431000 00:05:22.097 [2024-12-11T13:42:04.870Z] tsc_hz: 2700000000 (cyc) 00:05:22.097 [2024-12-11T13:42:04.870Z] ====================================== 00:05:22.097 [2024-12-11T13:42:04.870Z] poller_cost: 609 (cyc), 225 (nsec) 00:05:22.097 00:05:22.097 real 0m1.198s 00:05:22.097 user 0m1.126s 00:05:22.097 sys 0m0.066s 00:05:22.097 14:42:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.097 14:42:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.097 ************************************ 00:05:22.097 END TEST thread_poller_perf 00:05:22.097 ************************************ 00:05:22.097 14:42:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:22.097 00:05:22.097 real 0m2.662s 00:05:22.097 user 0m2.408s 00:05:22.097 sys 0m0.257s 00:05:22.097 14:42:04 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.097 14:42:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.097 ************************************ 00:05:22.097 END TEST thread 00:05:22.097 ************************************ 00:05:22.355 14:42:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:22.355 14:42:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.355 14:42:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.355 14:42:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.355 14:42:04 -- common/autotest_common.sh@10 -- # set +x 00:05:22.355 ************************************ 00:05:22.355 START TEST app_cmdline 00:05:22.355 ************************************ 00:05:22.355 14:42:04 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.356 * Looking for test storage... 
00:05:22.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:22.356 14:42:04 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.356 14:42:04 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.356 14:42:04 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.356 14:42:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.356 --rc genhtml_branch_coverage=1 00:05:22.356 --rc genhtml_function_coverage=1 00:05:22.356 --rc genhtml_legend=1 00:05:22.356 --rc geninfo_all_blocks=1 00:05:22.356 --rc geninfo_unexecuted_blocks=1 00:05:22.356 00:05:22.356 ' 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.356 --rc genhtml_branch_coverage=1 00:05:22.356 --rc genhtml_function_coverage=1 00:05:22.356 --rc genhtml_legend=1 00:05:22.356 --rc geninfo_all_blocks=1 00:05:22.356 --rc geninfo_unexecuted_blocks=1 
00:05:22.356 00:05:22.356 ' 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.356 --rc genhtml_branch_coverage=1 00:05:22.356 --rc genhtml_function_coverage=1 00:05:22.356 --rc genhtml_legend=1 00:05:22.356 --rc geninfo_all_blocks=1 00:05:22.356 --rc geninfo_unexecuted_blocks=1 00:05:22.356 00:05:22.356 ' 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.356 --rc genhtml_branch_coverage=1 00:05:22.356 --rc genhtml_function_coverage=1 00:05:22.356 --rc genhtml_legend=1 00:05:22.356 --rc geninfo_all_blocks=1 00:05:22.356 --rc geninfo_unexecuted_blocks=1 00:05:22.356 00:05:22.356 ' 00:05:22.356 14:42:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:22.356 14:42:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=554066 00:05:22.356 14:42:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:22.356 14:42:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 554066 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 554066 ']' 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.356 14:42:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.356 [2024-12-11 14:42:05.091315] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:05:22.356 [2024-12-11 14:42:05.091404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid554066 ] 00:05:22.614 [2024-12-11 14:42:05.159155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.614 [2024-12-11 14:42:05.220175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.872 14:42:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.872 14:42:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:22.872 14:42:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:23.130 { 00:05:23.130 "version": "SPDK v25.01-pre git sha1 3aefe4228", 00:05:23.130 "fields": { 00:05:23.130 "major": 25, 00:05:23.130 "minor": 1, 00:05:23.130 "patch": 0, 00:05:23.130 "suffix": "-pre", 00:05:23.130 "commit": "3aefe4228" 00:05:23.130 } 00:05:23.130 } 00:05:23.130 14:42:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:23.131 14:42:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:23.131 14:42:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:23.388 request: 00:05:23.388 { 00:05:23.388 "method": "env_dpdk_get_mem_stats", 00:05:23.388 "req_id": 1 00:05:23.388 } 00:05:23.388 Got JSON-RPC error response 00:05:23.388 response: 00:05:23.388 { 00:05:23.388 "code": -32601, 00:05:23.388 "message": "Method not found" 00:05:23.389 } 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.389 14:42:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 554066 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 554066 ']' 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 554066 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 554066 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 554066' 00:05:23.389 killing process with pid 554066 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@973 -- # kill 554066 00:05:23.389 14:42:06 app_cmdline -- common/autotest_common.sh@978 -- # wait 554066 00:05:23.955 00:05:23.955 real 0m1.615s 00:05:23.955 user 0m2.017s 00:05:23.955 sys 0m0.489s 00:05:23.955 14:42:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.955 14:42:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.955 ************************************ 00:05:23.955 END TEST app_cmdline 00:05:23.955 ************************************ 00:05:23.955 14:42:06 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.955 14:42:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.955 14:42:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.955 14:42:06 -- common/autotest_common.sh@10 -- # set +x 00:05:23.955 ************************************ 00:05:23.955 START TEST version 00:05:23.955 ************************************ 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.955 * Looking for test storage... 
00:05:23.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.955 14:42:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.955 14:42:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.955 14:42:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.955 14:42:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.955 14:42:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.955 14:42:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.955 14:42:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.955 14:42:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.955 14:42:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.955 14:42:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.955 14:42:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.955 14:42:06 version -- scripts/common.sh@344 -- # case "$op" in 00:05:23.955 14:42:06 version -- scripts/common.sh@345 -- # : 1 00:05:23.955 14:42:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.955 14:42:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.955 14:42:06 version -- scripts/common.sh@365 -- # decimal 1 00:05:23.955 14:42:06 version -- scripts/common.sh@353 -- # local d=1 00:05:23.955 14:42:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.955 14:42:06 version -- scripts/common.sh@355 -- # echo 1 00:05:23.955 14:42:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.955 14:42:06 version -- scripts/common.sh@366 -- # decimal 2 00:05:23.955 14:42:06 version -- scripts/common.sh@353 -- # local d=2 00:05:23.955 14:42:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.955 14:42:06 version -- scripts/common.sh@355 -- # echo 2 00:05:23.955 14:42:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.955 14:42:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.955 14:42:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.955 14:42:06 version -- scripts/common.sh@368 -- # return 0 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.955 --rc genhtml_branch_coverage=1 00:05:23.955 --rc genhtml_function_coverage=1 00:05:23.955 --rc genhtml_legend=1 00:05:23.955 --rc geninfo_all_blocks=1 00:05:23.955 --rc geninfo_unexecuted_blocks=1 00:05:23.955 00:05:23.955 ' 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.955 --rc genhtml_branch_coverage=1 00:05:23.955 --rc genhtml_function_coverage=1 00:05:23.955 --rc genhtml_legend=1 00:05:23.955 --rc geninfo_all_blocks=1 00:05:23.955 --rc geninfo_unexecuted_blocks=1 00:05:23.955 00:05:23.955 ' 00:05:23.955 14:42:06 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.955 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.955 --rc genhtml_branch_coverage=1 00:05:23.956 --rc genhtml_function_coverage=1 00:05:23.956 --rc genhtml_legend=1 00:05:23.956 --rc geninfo_all_blocks=1 00:05:23.956 --rc geninfo_unexecuted_blocks=1 00:05:23.956 00:05:23.956 ' 00:05:23.956 14:42:06 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.956 --rc genhtml_branch_coverage=1 00:05:23.956 --rc genhtml_function_coverage=1 00:05:23.956 --rc genhtml_legend=1 00:05:23.956 --rc geninfo_all_blocks=1 00:05:23.956 --rc geninfo_unexecuted_blocks=1 00:05:23.956 00:05:23.956 ' 00:05:23.956 14:42:06 version -- app/version.sh@17 -- # get_header_version major 00:05:23.956 14:42:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.956 14:42:06 version -- app/version.sh@14 -- # cut -f2 00:05:23.956 14:42:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.956 14:42:06 version -- app/version.sh@17 -- # major=25 00:05:23.956 14:42:06 version -- app/version.sh@18 -- # get_header_version minor 00:05:23.956 14:42:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.956 14:42:06 version -- app/version.sh@14 -- # cut -f2 00:05:23.956 14:42:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.956 14:42:06 version -- app/version.sh@18 -- # minor=1 00:05:23.956 14:42:06 version -- app/version.sh@19 -- # get_header_version patch 00:05:23.956 14:42:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.956 14:42:06 version -- app/version.sh@14 -- # cut -f2 00:05:23.956 14:42:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.956 14:42:06 version -- app/version.sh@19 -- # patch=0 00:05:24.215 14:42:06 version -- app/version.sh@20 -- # get_header_version suffix 00:05:24.215 14:42:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:24.215 14:42:06 version -- app/version.sh@14 -- # cut -f2 00:05:24.215 14:42:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:24.215 14:42:06 version -- app/version.sh@20 -- # suffix=-pre 00:05:24.215 14:42:06 version -- app/version.sh@22 -- # version=25.1 00:05:24.215 14:42:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:24.215 14:42:06 version -- app/version.sh@28 -- # version=25.1rc0 00:05:24.215 14:42:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:24.215 14:42:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:24.215 14:42:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:24.215 14:42:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:24.215 00:05:24.215 real 0m0.203s 00:05:24.215 user 0m0.128s 00:05:24.215 sys 0m0.100s 00:05:24.215 14:42:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.215 
14:42:06 version -- common/autotest_common.sh@10 -- # set +x 00:05:24.215 ************************************ 00:05:24.215 END TEST version 00:05:24.215 ************************************ 00:05:24.215 14:42:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:24.215 14:42:06 -- spdk/autotest.sh@194 -- # uname -s 00:05:24.215 14:42:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:24.215 14:42:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:24.215 14:42:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:24.215 14:42:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:24.215 14:42:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.215 14:42:06 -- common/autotest_common.sh@10 -- # set +x 00:05:24.215 14:42:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:24.215 14:42:06 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:24.215 14:42:06 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.215 14:42:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.215 14:42:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.215 14:42:06 -- common/autotest_common.sh@10 -- # set +x 00:05:24.215 ************************************ 00:05:24.215 START TEST nvmf_tcp 00:05:24.215 ************************************ 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:24.215 * Looking for test storage... 
00:05:24.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.215 14:42:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.215 --rc genhtml_branch_coverage=1 00:05:24.215 --rc genhtml_function_coverage=1 00:05:24.215 --rc genhtml_legend=1 00:05:24.215 --rc geninfo_all_blocks=1 00:05:24.215 --rc geninfo_unexecuted_blocks=1 00:05:24.215 00:05:24.215 ' 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.215 --rc genhtml_branch_coverage=1 00:05:24.215 --rc genhtml_function_coverage=1 00:05:24.215 --rc genhtml_legend=1 00:05:24.215 --rc geninfo_all_blocks=1 00:05:24.215 --rc geninfo_unexecuted_blocks=1 00:05:24.215 00:05:24.215 ' 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:24.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.215 --rc genhtml_branch_coverage=1 00:05:24.215 --rc genhtml_function_coverage=1 00:05:24.215 --rc genhtml_legend=1 00:05:24.215 --rc geninfo_all_blocks=1 00:05:24.215 --rc geninfo_unexecuted_blocks=1 00:05:24.215 00:05:24.215 ' 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.215 --rc genhtml_branch_coverage=1 00:05:24.215 --rc genhtml_function_coverage=1 00:05:24.215 --rc genhtml_legend=1 00:05:24.215 --rc geninfo_all_blocks=1 00:05:24.215 --rc geninfo_unexecuted_blocks=1 00:05:24.215 00:05:24.215 ' 00:05:24.215 14:42:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:24.215 14:42:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:24.215 14:42:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.215 14:42:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.474 ************************************ 00:05:24.474 START TEST nvmf_target_core 00:05:24.474 ************************************ 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.474 * Looking for test storage... 00:05:24.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.474 --rc genhtml_branch_coverage=1 00:05:24.474 --rc genhtml_function_coverage=1 00:05:24.474 --rc genhtml_legend=1 00:05:24.474 --rc geninfo_all_blocks=1 00:05:24.474 --rc geninfo_unexecuted_blocks=1 00:05:24.474 00:05:24.474 ' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.474 --rc genhtml_branch_coverage=1 00:05:24.474 --rc genhtml_function_coverage=1 00:05:24.474 --rc genhtml_legend=1 00:05:24.474 --rc geninfo_all_blocks=1 00:05:24.474 --rc geninfo_unexecuted_blocks=1 00:05:24.474 00:05:24.474 ' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.474 --rc genhtml_branch_coverage=1 00:05:24.474 --rc genhtml_function_coverage=1 00:05:24.474 --rc genhtml_legend=1 00:05:24.474 --rc geninfo_all_blocks=1 00:05:24.474 --rc geninfo_unexecuted_blocks=1 00:05:24.474 00:05:24.474 ' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.474 --rc genhtml_branch_coverage=1 00:05:24.474 --rc genhtml_function_coverage=1 00:05:24.474 --rc genhtml_legend=1 00:05:24.474 --rc geninfo_all_blocks=1 00:05:24.474 --rc geninfo_unexecuted_blocks=1 00:05:24.474 00:05:24.474 ' 00:05:24.474 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:24.475 
************************************ 00:05:24.475 START TEST nvmf_abort 00:05:24.475 ************************************ 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.475 * Looking for test storage... 00:05:24.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.475 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.733 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.733 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.733 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.733 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.733 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.734 --rc genhtml_branch_coverage=1 00:05:24.734 --rc genhtml_function_coverage=1 00:05:24.734 --rc genhtml_legend=1 00:05:24.734 --rc geninfo_all_blocks=1 00:05:24.734 --rc geninfo_unexecuted_blocks=1 00:05:24.734 00:05:24.734 ' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.734 --rc genhtml_branch_coverage=1 00:05:24.734 --rc genhtml_function_coverage=1 00:05:24.734 --rc genhtml_legend=1 00:05:24.734 --rc geninfo_all_blocks=1 00:05:24.734 --rc geninfo_unexecuted_blocks=1 00:05:24.734 00:05:24.734 ' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.734 --rc genhtml_branch_coverage=1 00:05:24.734 --rc genhtml_function_coverage=1 00:05:24.734 --rc genhtml_legend=1 00:05:24.734 --rc geninfo_all_blocks=1 00:05:24.734 --rc geninfo_unexecuted_blocks=1 00:05:24.734 00:05:24.734 ' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.734 --rc genhtml_branch_coverage=1 00:05:24.734 --rc genhtml_function_coverage=1 00:05:24.734 --rc genhtml_legend=1 00:05:24.734 --rc geninfo_all_blocks=1 00:05:24.734 --rc geninfo_unexecuted_blocks=1 00:05:24.734 00:05:24.734 ' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
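[Note] The "[: : integer expression expected" message repeated in the trace above is a real shell bug in test/nvmf/common.sh: at line 33 an unset flag expands to the empty string, so '[' '' -eq 1 ']' hands the test builtin a non-integer operand. A minimal sketch of the usual guard, assuming a flag-style variable (the actual variable name at common.sh line 33 is not visible in this log, so SOME_TEST_FLAG below is a hypothetical stand-in):

# default the unset/empty case to 0 before the numeric comparison;
# SOME_TEST_FLAG is a hypothetical stand-in for the real variable
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi

With the :-0 default the comparison always sees an integer, and the "integer expression expected" noise disappears from the trace without changing behavior for runs that do set the flag.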
00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:24.734 14:42:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:27.266 14:42:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:27.266 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:27.266 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:27.266 14:42:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:27.266 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:27.266 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:27.267 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:27.267 14:42:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:27.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:27.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:05:27.267 00:05:27.267 --- 10.0.0.2 ping statistics --- 00:05:27.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.267 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:27.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:27.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:05:27.267 00:05:27.267 --- 10.0.0.1 ping statistics --- 00:05:27.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:27.267 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=556657 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 556657 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 556657 ']' 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 [2024-12-11 14:42:09.718239] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
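[Note] The network setup traced above is the core of nvmftestinit on a physical (phy) rig: the first e810 port is moved into a private namespace to act as the NVMe/TCP target, while the second port stays in the root namespace as the initiator, and a ping in each direction verifies the point-to-point link. Condensed into the bare commands (interface names and addresses exactly as in this run; the addr-flush steps and the iptables comment wrapper are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check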
00:05:27.267 [2024-12-11 14:42:09.718329] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:27.267 [2024-12-11 14:42:09.797238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.267 [2024-12-11 14:42:09.858086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:27.267 [2024-12-11 14:42:09.858138] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:27.267 [2024-12-11 14:42:09.858166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:27.267 [2024-12-11 14:42:09.858178] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:27.267 [2024-12-11 14:42:09.858188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:27.267 [2024-12-11 14:42:09.859781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.267 [2024-12-11 14:42:09.859855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.267 [2024-12-11 14:42:09.859859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.267 14:42:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.267 [2024-12-11 14:42:10.004362] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.267 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 Malloc0 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 Delay0 
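[Note] Each rpc_cmd call in this trace forwards its arguments to scripts/rpc.py against the freshly started target, so the transport and bdev bring-up above plus the subsystem wiring just below amount to the following manual sequence (a sketch, assuming it is run from the spdk repo root with the default RPC socket; all flags are copied verbatim from the trace):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256    # TCP transport, opts as traced above
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB bdev, 4096-byte blocks
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Delay0 wraps Malloc0 with a large artificial latency, which is the point of this test: it keeps plenty of commands in flight so the abort example below has something to abort.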
00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 [2024-12-11 14:42:10.078658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.526 14:42:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:27.526 [2024-12-11 14:42:10.193375] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:30.057 Initializing NVMe Controllers 00:05:30.057 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:30.057 controller IO queue size 128 less than required 00:05:30.057 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:30.057 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:30.057 Initialization complete. Launching workers. 
00:05:30.057 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28608 00:05:30.057 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28669, failed to submit 62 00:05:30.058 success 28612, unsuccessful 57, failed 0 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:30.058 rmmod nvme_tcp 00:05:30.058 rmmod nvme_fabrics 00:05:30.058 rmmod nvme_keyring 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 556657 ']' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 556657 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 556657 ']' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 556657 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 556657 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 556657' 00:05:30.058 killing process with pid 556657 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 556657 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 556657 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:30.058 14:42:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:32.609 00:05:32.609 real 0m7.577s 00:05:32.609 user 0m10.975s 00:05:32.609 sys 0m2.707s 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.609 ************************************ 00:05:32.609 END TEST nvmf_abort 00:05:32.609 ************************************ 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:32.609 ************************************ 00:05:32.609 START TEST nvmf_ns_hotplug_stress 00:05:32.609 ************************************ 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:32.609 * Looking for test storage... 
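[Note] The START/END banners, the "real/user/sys" lines, and the per-test labels (nvmf_tcp.nvmf_target_core.nvmf_abort and so on) all come from the run_test helper in autotest_common.sh. A simplified sketch of its shape (the real helper also toggles xtrace and records timing data for the CI dashboards; this only reproduces the banners and the time wrapper visible in the trace):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                      # run the test script with its arguments, e.g. abort.sh --transport=tcp
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}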
00:05:32.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.609 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.610 --rc genhtml_branch_coverage=1 00:05:32.610 --rc genhtml_function_coverage=1 00:05:32.610 --rc genhtml_legend=1 00:05:32.610 --rc geninfo_all_blocks=1 00:05:32.610 --rc geninfo_unexecuted_blocks=1 00:05:32.610 00:05:32.610 ' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:32.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:32.610 14:42:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:34.519 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:34.520 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.520 
14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:34.520 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:34.520 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:34.520 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:34.520 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:05:34.779 00:05:34.779 --- 10.0.0.2 ping statistics --- 00:05:34.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.779 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:05:34.779 00:05:34.779 --- 10.0.0.1 ping statistics --- 00:05:34.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.779 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=559025 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 559025 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
559025 ']' 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.779 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.779 [2024-12-11 14:42:17.398292] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:05:34.779 [2024-12-11 14:42:17.398393] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.779 [2024-12-11 14:42:17.469204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.779 [2024-12-11 14:42:17.522409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.779 [2024-12-11 14:42:17.522467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.779 [2024-12-11 14:42:17.522491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.779 [2024-12-11 14:42:17.522501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.779 [2024-12-11 14:42:17.522510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
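What the trace above amounts to is the construction of a single-host NVMe/TCP test bed: one port of the discovered E810 pair (cvl_0_0) is moved into the network namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the firewall is opened for port 4420, reachability is verified both ways, and nvmf_tgt is started inside the namespace. A condensed sketch of those steps, assuming the same interface names and a built SPDK tree (run as root from the SPDK root):

    # Target port gets its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic from the initiator port in, then check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    # Start the target inside the namespace with the core mask this run uses.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The RPC calls that follow (nvmf_create_transport -t tcp -o -u 8192, nvmf_create_subsystem, nvmf_subsystem_add_listener on 10.0.0.2:4420, plus the Malloc0/Delay0/NULL1 bdev setup) all talk to this process through /var/tmp/spdk.sock.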
00:05:34.779 [2024-12-11 14:42:17.524068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.779 [2024-12-11 14:42:17.524131] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.779 [2024-12-11 14:42:17.524135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:35.038 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:35.295 [2024-12-11 14:42:17.916215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:35.295 14:42:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:35.552 14:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:35.821 [2024-12-11 14:42:18.439056] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:35.821 14:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:36.111 14:42:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:36.368 Malloc0 00:05:36.368 14:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:36.625 Delay0 00:05:36.625 14:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.883 14:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:37.140 NULL1 00:05:37.140 14:42:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:37.398 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=559449 00:05:37.398 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:37.398 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:37.398 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.656 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.913 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:37.913 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:38.171 true 00:05:38.171 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:38.171 14:42:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.429 14:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.686 14:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:38.687 14:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:38.944 true 00:05:39.202 14:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:39.202 14:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.202 14:42:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.767 14:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:39.767 14:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:39.767 true 00:05:39.767 14:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:39.767 14:42:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.700 Read completed with error (sct=0, sc=11) 00:05:40.700 14:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.957 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.215 14:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:41.215 14:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:41.215 true 00:05:41.472 14:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:41.472 14:42:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.729 14:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.987 14:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:41.987 14:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:42.245 true 00:05:42.245 14:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:42.245 14:42:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.180 14:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.438 14:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:43.438 14:42:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:43.695 true 00:05:43.695 14:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:43.695 14:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.952 14:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.210 14:42:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:44.210 14:42:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:44.467 true 00:05:44.467 14:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:44.468 14:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.401 14:42:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.401 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:45.401 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:45.658 true 00:05:45.659 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:45.659 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.916 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.481 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:46.481 14:42:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:46.481 true 00:05:46.481 14:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:46.481 14:42:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.415 14:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.672 14:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:47.672 14:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:47.930 true 00:05:47.930 14:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:47.930 14:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.188 14:42:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.754 14:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:48.754 14:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:48.754 true 00:05:48.754 14:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:48.754 14:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.012 14:42:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.269 14:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:49.269 14:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:49.527 true 00:05:49.527 14:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:49.527 14:42:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.717 14:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.975 14:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:50.975 14:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:51.232 true 00:05:51.232 14:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:51.232 14:42:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.490 14:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.748 14:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:51.748 14:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:52.006 true 00:05:52.006 14:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:52.006 14:42:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.937 14:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.938 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.195 14:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:53.195 14:42:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:53.453 true 00:05:53.453 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:53.453 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.710 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.968 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:53.968 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:54.226 true 00:05:54.226 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:54.226 14:42:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.159 14:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.416 14:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:55.416 14:42:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:55.701 true 00:05:55.701 14:42:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:55.701 14:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.982 14:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.982 14:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:55.982 14:42:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:56.548 true 00:05:56.548 14:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:56.548 14:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.113 14:42:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.679 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:57.679 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:57.679 true 00:05:57.679 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:57.679 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.937 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.502 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:58.502 14:42:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:58.502 true 00:05:58.502 14:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:58.502 14:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.760 14:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.017 14:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:59.017 14:42:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:59.275 true 00:05:59.533 14:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:05:59.533 14:42:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.467 14:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.725 14:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:00.725 14:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:00.982 true 00:06:00.982 14:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:00.982 14:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.240 14:42:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.497 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:01.497 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:01.755 true 00:06:01.755 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:01.755 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.012 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.270 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:02.270 14:42:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:02.526 true 00:06:02.526 14:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:02.526 14:42:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.457 14:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.714 14:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:03.714 14:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:03.972 true 00:06:03.972 14:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:03.972 14:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.229 14:42:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.487 14:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:04.487 14:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:04.744 true 00:06:04.744 14:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:04.744 14:42:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.677 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.677 14:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.935 14:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:05.935 14:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:06.193 true 00:06:06.193 14:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:06.193 14:42:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.450 14:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.707 14:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:06.707 14:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:06.964 true 00:06:06.964 14:42:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:06.964 14:42:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.894 Initializing NVMe Controllers
00:06:07.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:07.894 Controller IO queue size 128, less than required.
00:06:07.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.894 Controller IO queue size 128, less than required.
00:06:07.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:07.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:07.894 Initialization complete. Launching workers.
00:06:07.894 ========================================================
00:06:07.894 Latency(us)
00:06:07.894 Device Information : IOPS MiB/s Average min max
00:06:07.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 708.93 0.35 81469.79 2258.31 1051193.50
00:06:07.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9255.47 4.52 13787.89 2920.02 451312.93
00:06:07.894 ========================================================
00:06:07.894 Total : 9964.40 4.87 18603.22 2258.31 1051193.50
00:06:07.894
00:06:07.894 14:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.151 14:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:08.151 14:42:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:08.409 true 00:06:08.409 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 559449 00:06:08.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (559449) - No such process 00:06:08.409 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 559449 00:06:08.409 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.666 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.924 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:08.924 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:08.924 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:08.924 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i <
nthreads )) 00:06:08.924 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:09.181 null0 00:06:09.181 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.181 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.181 14:42:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:09.439 null1 00:06:09.439 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.439 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.439 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:09.696 null2 00:06:09.696 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.696 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.696 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:09.953 null3 00:06:09.953 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.953 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.953 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:10.211 null4 00:06:10.211 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.211 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.211 14:42:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:10.776 null5 00:06:10.776 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.776 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.776 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:10.776 null6 00:06:10.776 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.776 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.776 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 
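
The @44-@53 entries earlier in this block come from the script's monitor loop: while the background I/O generator (PID 559449 here) is alive, the test keeps churning namespace 1 and growing the NULL1 bdev under live I/O; once kill -0 reports the process gone, the script reaps it with wait. A minimal sketch of that loop, reconstructed from the traced lines only (the perf_pid variable name and the null_size increment are assumptions, not verbatim script code; rpc_py stands for the absolute scripts/rpc.py path shown in the trace):

    # Sketch reconstructed from the @44-@50 trace, not copied from
    # ns_hotplug_stress.sh.
    while kill -0 "$perf_pid"; do                                        # @44: generator still running?
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
        null_size=$((null_size + 1))                                     # @49: reaches 1029 on the final pass
        $rpc_py bdev_null_resize NULL1 "$null_size"                      # @50: resize under live I/O
    done
    wait "$perf_pid"                                                     # @53: reap after it exits
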
00:06:11.034 null7 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
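
Each @60 entry above creates one 100 MiB null bdev with a 4096-byte block size; null0 through null7 give the eight hotplug workers one backing device apiece. The traced loop is equivalent to the following sketch (rpc_py again shorthand for the path in the trace):

    # Equivalent form of the @59-@60 creation loop.
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096   # name, size in MiB, block size
    done
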
00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
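
The interleaved @14-@18 entries trace the add_remove helper that each worker runs: bind a namespace ID to a bdev, then add and remove that namespace ten times in a row. Workers 6 through 8 are dispatched the same way in the entries that follow. As a sketch implied by the traced lines (exact argument handling in the real script may differ):

    # add_remove() as implied by the @14-@18 trace lines.
    add_remove() {
        local nsid=$1 bdev=$2                                           # @14
        for ((i = 0; i < 10; i++)); do                                  # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" \
                nqn.2016-06.io.spdk:cnode1 "$bdev"                      # @17
            $rpc_py nvmf_subsystem_remove_ns \
                nqn.2016-06.io.spdk:cnode1 "$nsid"                      # @18
        done
    }
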
00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:11.034 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
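
The @62-@64 entries show how the eight workers are dispatched: each add_remove call is backgrounded and its PID collected, and the @66 wait just below (PIDs 563521 through 563534) blocks until every worker finishes its ten iterations. Sketched from the trace:

    # Dispatch/join logic implied by the @62-@66 trace lines.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # NSIDs 1..8 onto null0..null7
        pids+=($!)                           # @64: remember each worker PID
    done
    wait "${pids[@]}"                        # @66: join all eight workers
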
00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 563521 563522 563524 563526 563528 563530 563532 563534 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.035 14:42:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.600 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.858 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.117 14:42:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.117 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.375 14:42:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.375 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.632 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.633 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.633 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.633 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.633 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.633 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.633 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.633 14:42:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.890 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.891 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.891 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.891 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
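
Because eight workers race through the RPC server, the add and remove entries interleave nondeterministically from here on; the only invariant is that each worker alternates add and remove for its own NSID. When reproducing this by hand, the subsystem's current namespace set can be checked between steps with nvmf_get_subsystems, a standard SPDK RPC (this call is not part of the test's trace, and the jq filter is illustrative only):

    # Hand-run check of which namespaces are currently attached to cnode1.
    scripts/rpc.py nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'
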
00:06:12.891 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.891 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.891 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.149 14:42:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.407 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.407 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.407 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.407 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.407 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.407 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.665 14:42:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.665 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.922 14:42:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.922 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.181 14:42:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.471 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.471 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.471 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.472 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.472 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.472 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.472 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.472 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.730 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.988 14:42:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.988 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.246 14:42:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.246 14:42:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.246 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.246 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.246 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.246 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.246 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.504 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.504 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.504 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.504 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.504 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.504 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.762 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.762 14:42:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
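
By this point the workers are several iterations in. For scale: with nthreads=8 and the ten-pass (( i < 10 )) bound seen throughout, a clean run issues 80 nvmf_subsystem_add_ns and 80 nvmf_subsystem_remove_ns calls from the workers alone, on top of the monitor loop's churn:

    # Back-of-envelope worker RPC volume for a full run.
    echo "$((8 * 10)) add_ns + $((8 * 10)) remove_ns calls"   # 80 + 80
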
00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.021 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.279 14:42:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.537 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.795 14:42:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.795 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.054 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.312 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:17.312 14:42:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.313 rmmod nvme_tcp 00:06:17.313 rmmod nvme_fabrics 00:06:17.313 rmmod nvme_keyring 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 559025 ']' 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 559025 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 559025 ']' 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 559025 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 559025 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 559025' 00:06:17.313 killing process with pid 559025 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 559025 00:06:17.313 14:42:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 559025 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null'
00:06:17.573 14:43:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:19.479 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:19.479
00:06:19.479 real 0m47.437s
00:06:19.479 user 3m40.392s
00:06:19.479 sys 0m16.228s
00:06:19.479 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:19.479 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:19.479 ************************************
00:06:19.479 END TEST nvmf_ns_hotplug_stress ************************************
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:19.738 ************************************
00:06:19.738 START TEST nvmf_delete_subsystem
00:06:19.738 ************************************
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:19.738 * Looking for test storage...
00:06:19.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:19.738 14:43:02
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.738 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.739 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.739 --rc genhtml_branch_coverage=1 00:06:19.739 --rc genhtml_function_coverage=1 00:06:19.739 --rc genhtml_legend=1 00:06:19.739 --rc geninfo_all_blocks=1 00:06:19.739 --rc geninfo_unexecuted_blocks=1 00:06:19.739 00:06:19.739 ' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.739 14:43:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.275 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:22.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.276 
14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:22.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:22.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:22.276 Found net devices under 0000:0a:00.1: cvl_0_1 
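
The @366-@429 records above show nvmf/common.sh resolving the two detected e810 ports (both reporting device ID 0x8086:0x159b) to their kernel interfaces: for each PCI function it globs the sysfs net/ directory and keeps the basename. Condensed from the trace into a self-contained form (a simplified sketch; the real helper also branches on RDMA transports and skips interfaces that are not up):

  pci_devs=(0000:0a:00.0 0000:0a:00.1)          # the two functions found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # each port publishes its netdev name under /sys/bus/pci/devices/<bdf>/net/
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs prefix, keeping e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
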
00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:22.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:22.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:06:22.276 00:06:22.276 --- 10.0.0.2 ping statistics --- 00:06:22.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.276 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:22.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:22.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:06:22.276 00:06:22.276 --- 10.0.0.1 ping statistics --- 00:06:22.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:22.276 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:06:22.276 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=566427 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 566427 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 566427 ']' 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.277 14:43:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.277 14:43:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.277 [2024-12-11 14:43:04.765028] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:06:22.277 [2024-12-11 14:43:04.765118] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.277 [2024-12-11 14:43:04.838955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.277 [2024-12-11 14:43:04.897358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.277 [2024-12-11 14:43:04.897420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.277 [2024-12-11 14:43:04.897433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.277 [2024-12-11 14:43:04.897444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.277 [2024-12-11 14:43:04.897455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.277 [2024-12-11 14:43:04.898974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.277 [2024-12-11 14:43:04.898980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.277 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 [2024-12-11 14:43:05.050008] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:22.534 14:43:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 [2024-12-11 14:43:05.066189] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 NULL1 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 Delay0 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=566452 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:22.534 14:43:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:22.534 [2024-12-11 14:43:05.151044] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
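
With that deprecation warning, setup for delete_subsystem.sh is complete: cnode1 listens on 10.0.0.2:4420, its only namespace is a 1000 MiB null bdev wrapped in a delay bdev whose four latency knobs are all set to 1,000,000 us (roughly a second per I/O, so commands accumulate rather than complete), and spdk_nvme_perf (pid 566452) is queueing 128 outstanding 512-byte random I/Os against it. Stripped of the xtrace plumbing, the RPC sequence in the records above amounts to the following (paraphrased; the real test issues these through its rpc_cmd wrapper against the netns-scoped target):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512         # 1000 MiB backing device, 512-byte blocks
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &  # runs in the background as pid 566452
  sleep 2                                        # let I/O pile up before the delete under test
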
00:06:24.431 14:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.431 14:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.431 14:43:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.689 Write completed with error (sct=0, sc=8) 00:06:24.689 Read completed with error (sct=0, sc=8) 00:06:24.689 Write completed with error (sct=0, sc=8) 00:06:24.689 Read completed with error (sct=0, sc=8) 00:06:24.689 starting I/O failed: -6 00:06:24.689 Read completed with error (sct=0, sc=8) 00:06:24.689 Read completed with error (sct=0, sc=8) 00:06:24.689 Read completed with error (sct=0, sc=8) 00:06:24.689 Write completed with error (sct=0, sc=8) 00:06:24.689 starting I/O failed: -6 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 [2024-12-11 14:43:07.272964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7680 is same with the state(6) to be set 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 starting I/O failed: -6 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Write completed with error (sct=0, sc=8) 00:06:24.690 Read completed with error (sct=0, sc=8) 00:06:24.690 Read 
completed with error (sct=0, sc=8)
[... many repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:06:24.690 [2024-12-11 14:43:07.273495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7840000c80 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:25.623 [2024-12-11 14:43:08.246072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f89b0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:25.623 [2024-12-11 14:43:08.272890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f784000d390 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:25.623 [2024-12-11 14:43:08.273300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f74a0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:25.623 [2024-12-11 14:43:08.276945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f7860 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" entries omitted ...]
00:06:25.623 [2024-12-11 14:43:08.277112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f72c0 is same with the state(6) to be set
00:06:25.623 Initializing NVMe Controllers
00:06:25.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:25.623 Controller IO queue size 128, less than required.
00:06:25.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:25.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:25.623 Initialization complete. Launching workers.
00:06:25.623 ========================================================
00:06:25.623                                                                           Latency(us)
00:06:25.623 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:06:25.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.80   0.08  990672.89     508.05 2001163.55
00:06:25.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 147.93   0.07  922462.22     311.58 1998054.59
00:06:25.623 ========================================================
00:06:25.623 Total                                                                    : 312.73   0.15  958408.16     311.58 2001163.55
00:06:25.623
00:06:25.623 [2024-12-11 14:43:08.277933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f89b0 (9): Bad file descriptor
00:06:25.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:25.623 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.623 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:25.623 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 566452
00:06:25.623 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 566452
00:06:26.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (566452) - No such process
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 566452
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 566452
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 566452
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.189 14:43:08
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.189 [2024-12-11 14:43:08.800957] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=566862 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862 00:06:26.189 14:43:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.189 [2024-12-11 14:43:08.873373] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
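The xtrace lines above capture the delete-subsystem-under-load pattern: spdk_nvme_perf is launched in the background against cnode1, the subsystem is deleted while I/O is still in flight, and the script then polls the perf process until it exits. A minimal sketch of that polling loop, assuming perf_pid holds the backgrounded PID and using the bound and interval seen in the trace:

    perf_pid=$!                                # PID of the backgrounded spdk_nvme_perf
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do  # kill -0 only probes that the PID still exists
        (( delay++ > 20 )) && exit 1           # give up after ~10 s (20 iterations x 0.5 s)
        sleep 0.5
    done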
00:06:26.754 14:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:26.754 14:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:26.754 14:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.319 14:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.319 14:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:27.319 14:43:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:27.576 14:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:27.576 14:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:27.576 14:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.141 14:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.141 14:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:28.141 14:43:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:28.706 14:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:28.706 14:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:28.706 14:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.271 14:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.271 14:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:29.271 14:43:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:06:29.528 Initializing NVMe Controllers
00:06:29.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:29.528 Controller IO queue size 128, less than required.
00:06:29.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:29.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:29.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:29.528 Initialization complete. Launching workers.
00:06:29.528 ========================================================
00:06:29.528                                                                           Latency(us)
00:06:29.528 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:06:29.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00   0.06 1004098.16 1000168.32 1013684.28
00:06:29.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00   0.06 1004799.11 1000272.24 1042805.48
00:06:29.528 ========================================================
00:06:29.528 Total                                                                    : 256.00   0.12 1004448.63 1000168.32 1042805.48
00:06:29.528
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 566862
00:06:29.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (566862) - No such process
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 566862
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:29.786 rmmod nvme_tcp
00:06:29.786 rmmod nvme_fabrics
00:06:29.786 rmmod nvme_keyring
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 566427 ']'
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 566427
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 566427 ']'
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 566427
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 566427
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo
']' 00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 566427' 00:06:29.786 killing process with pid 566427 00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 566427 00:06:29.786 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 566427 00:06:30.045 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.046 14:43:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.953 00:06:31.953 real 0m12.383s 00:06:31.953 user 0m27.932s 00:06:31.953 sys 0m3.005s 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.953 ************************************ 00:06:31.953 END TEST nvmf_delete_subsystem 00:06:31.953 ************************************ 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.953 14:43:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:32.212 ************************************ 00:06:32.212 START TEST nvmf_host_management 00:06:32.212 ************************************ 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:32.212 * Looking for test storage... 
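The nvmftestfini teardown traced above removes only the firewall rules the test itself installed: every rule is added with an SPDK_NVMF comment tag, so cleanup is a save/filter/restore pipeline. A minimal sketch of the pairing, using the rule and the pipeline exactly as they appear elsewhere in this log:

    # setup: tag the accept rule so it can be identified later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every tagged rule, keep everything else intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore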
00:06:32.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.212 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:32.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.213 --rc genhtml_branch_coverage=1 00:06:32.213 --rc genhtml_function_coverage=1 00:06:32.213 --rc genhtml_legend=1 00:06:32.213 --rc geninfo_all_blocks=1 00:06:32.213 --rc geninfo_unexecuted_blocks=1 00:06:32.213 00:06:32.213 ' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:32.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.213 --rc genhtml_branch_coverage=1 00:06:32.213 --rc genhtml_function_coverage=1 00:06:32.213 --rc genhtml_legend=1 00:06:32.213 --rc geninfo_all_blocks=1 00:06:32.213 --rc geninfo_unexecuted_blocks=1 00:06:32.213 00:06:32.213 ' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:32.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.213 --rc genhtml_branch_coverage=1 00:06:32.213 --rc genhtml_function_coverage=1 00:06:32.213 --rc genhtml_legend=1 00:06:32.213 --rc geninfo_all_blocks=1 00:06:32.213 --rc geninfo_unexecuted_blocks=1 00:06:32.213 00:06:32.213 ' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:32.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.213 --rc genhtml_branch_coverage=1 00:06:32.213 --rc genhtml_function_coverage=1 00:06:32.213 --rc genhtml_legend=1 00:06:32.213 --rc geninfo_all_blocks=1 00:06:32.213 --rc geninfo_unexecuted_blocks=1 00:06:32.213 00:06:32.213 ' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same three toolchain prefixes repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... long PATH value omitted; same toolchain prefixes prepended again ...]
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... long PATH value omitted; same toolchain prefixes prepended again ...]
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... long PATH value omitted ...]
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:06:32.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:32.213 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:32.214 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:32.214 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:32.214 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:32.214 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:32.214 14:43:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:34.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:34.923 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:34.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:34.924 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.924 14:43:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:34.924 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:34.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:06:34.924 00:06:34.924 --- 10.0.0.2 ping statistics --- 00:06:34.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.924 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:06:34.924 00:06:34.924 --- 10.0.0.1 ping statistics --- 00:06:34.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.924 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=569336 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 569336 00:06:34.924 14:43:17 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 569336 ']' 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.924 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.924 [2024-12-11 14:43:17.354967] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:06:34.924 [2024-12-11 14:43:17.355054] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.924 [2024-12-11 14:43:17.431302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.924 [2024-12-11 14:43:17.492698] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.924 [2024-12-11 14:43:17.492757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.924 [2024-12-11 14:43:17.492770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.924 [2024-12-11 14:43:17.492781] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.924 [2024-12-11 14:43:17.492791] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
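The reactor notices just below line up with the -m 0x1E core mask passed to nvmf_tgt above: 0x1E is binary 11110, i.e. cores 1 through 4, which is why four reactors start while core 0 stays free. A minimal shell sketch for decoding such a mask:

    mask=0x1E                              # core mask from the nvmf_tgt command line
    printf '%d -> cores:' "$((mask))"      # 0x1E is decimal 30, binary 11110
    for i in {0..7}; do
        (( (mask >> i) & 1 )) && printf ' %d' "$i"
    done
    echo                                   # prints: 30 -> cores: 1 2 3 4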
00:06:34.924 [2024-12-11 14:43:17.494353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.924 [2024-12-11 14:43:17.494437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.924 [2024-12-11 14:43:17.494377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.924 [2024-12-11 14:43:17.494440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.925 [2024-12-11 14:43:17.653920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.925 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.183 Malloc0 00:06:35.183 [2024-12-11 14:43:17.728819] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=569383 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 569383 /var/tmp/bdevperf.sock 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 569383 ']' 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:35.183 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:35.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:35.184 { 00:06:35.184 "params": { 00:06:35.184 "name": "Nvme$subsystem", 00:06:35.184 "trtype": "$TEST_TRANSPORT", 00:06:35.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:35.184 "adrfam": "ipv4", 00:06:35.184 "trsvcid": "$NVMF_PORT", 00:06:35.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:35.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:35.184 "hdgst": ${hdgst:-false}, 00:06:35.184 "ddgst": ${ddgst:-false} 00:06:35.184 }, 00:06:35.184 "method": "bdev_nvme_attach_controller" 00:06:35.184 } 00:06:35.184 EOF 00:06:35.184 )") 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:35.184 14:43:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:35.184 "params": { 00:06:35.184 "name": "Nvme0", 00:06:35.184 "trtype": "tcp", 00:06:35.184 "traddr": "10.0.0.2", 00:06:35.184 "adrfam": "ipv4", 00:06:35.184 "trsvcid": "4420", 00:06:35.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:35.184 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:35.184 "hdgst": false, 00:06:35.184 "ddgst": false 00:06:35.184 }, 00:06:35.184 "method": "bdev_nvme_attach_controller" 00:06:35.184 }' 00:06:35.184 [2024-12-11 14:43:17.804796] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
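
The gen_nvmf_target_json trace above (nvmf/common.sh) shows how the harness builds the initiator configuration: one heredoc template per subsystem is accumulated into config(), joined with IFS=',' and pretty-printed through jq, and bdevperf reads the result as --json /dev/fd/63. Condensed into a standalone sketch, with the caveat that the inner attach-controller object is verbatim from the log while the surrounding "subsystems"/"bdev" wrapper is assumed from SPDK's usual JSON config layout, not shown in this trace:

# Sketch only; wrapper layout assumed, inner object taken from the log.
gen_bdevperf_json() {
jq . <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
}
# Roughly equivalent to the /dev/fd/63 feed in the run above:
#   bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json) -q 64 -o 65536 -w verify -t 10
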
00:06:35.184 [2024-12-11 14:43:17.804892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569383 ] 00:06:35.184 [2024-12-11 14:43:17.875774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.184 [2024-12-11 14:43:17.935094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.442 Running I/O for 10 seconds... 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:35.442 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:35.443 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.443 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.443 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.443 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.443 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.443 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.701 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:35.701 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:35.701 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:35.960 
14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=547 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 547 -ge 100 ']' 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.960 [2024-12-11 14:43:18.526958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.960 [2024-12-11 14:43:18.527045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.527065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.960 [2024-12-11 14:43:18.527079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.527093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.960 [2024-12-11 14:43:18.527106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.527126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.960 [2024-12-11 14:43:18.527139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.527152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96670 is same with the state(6) to be set 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.960 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.960 [2024-12-11 14:43:18.533474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 
[2024-12-11 14:43:18.533813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.960 [2024-12-11 14:43:18.533871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.960 [2024-12-11 14:43:18.533885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.533899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.533913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.533937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.533952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.533965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.533980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.533993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 
14:43:18.534119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 
14:43:18.534404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 
14:43:18.534723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.534974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.534989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.535002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.535016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.535030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 
14:43:18.535044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.535057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.535071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.961 [2024-12-11 14:43:18.535084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.961 [2024-12-11 14:43:18.535098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 
14:43:18.535355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 [2024-12-11 14:43:18.535469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.962 [2024-12-11 14:43:18.535483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.962 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.962 14:43:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:35.962 [2024-12-11 14:43:18.536714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:35.962 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:35.962 00:06:35.962 Latency(us) 00:06:35.962 [2024-12-11T13:43:18.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.962 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:35.962 Job: Nvme0n1 ended in about 0.41 seconds with error 00:06:35.962 Verification LBA range: start 0x0 length 0x400 00:06:35.962 Nvme0n1 : 0.41 1560.71 97.54 156.07 0.00 35992.11 2463.67 39418.69 00:06:35.962 [2024-12-11T13:43:18.735Z] =================================================================================================================== 00:06:35.962 [2024-12-11T13:43:18.735Z] Total : 1560.71 97.54 156.07 0.00 35992.11 2463.67 39418.69 00:06:35.962 [2024-12-11 14:43:18.538611] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.962 [2024-12-11 14:43:18.538641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b96670 (9): Bad file descriptor 00:06:35.962 [2024-12-11 14:43:18.589669] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
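
Before this first run was torn down, the harness had gated on actual traffic: the waitforio trace further up (target/host_management.sh @45-@64) polled bdev_get_iostat until Nvme0n1 had completed at least 100 reads, observing 67 ops on the first poll and 547 on the second. The loop reconstructs from that xtrace roughly as follows (a sketch, not the verbatim script; rpc_cmd is the harness's JSON-RPC client wrapper):

# Reconstruction from the xtrace, for readability; not the verbatim source.
waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1                          # @45
    [ -z "$bdev" ] && return 1                              # @49
    local ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do                       # @54
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')                 # @55
        if [ "$read_io_count" -ge 100 ]; then               # @58
            ret=0
            break
        fi
        sleep 0.25                                          # @62
    done
    return $ret                                             # @64
}
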
00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 569383 00:06:36.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (569383) - No such process 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:36.895 { 00:06:36.895 "params": { 00:06:36.895 "name": "Nvme$subsystem", 00:06:36.895 "trtype": "$TEST_TRANSPORT", 00:06:36.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:36.895 "adrfam": "ipv4", 00:06:36.895 "trsvcid": "$NVMF_PORT", 00:06:36.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:36.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:36.895 "hdgst": ${hdgst:-false}, 00:06:36.895 "ddgst": ${ddgst:-false} 00:06:36.895 }, 00:06:36.895 "method": "bdev_nvme_attach_controller" 00:06:36.895 } 00:06:36.895 EOF 00:06:36.895 )") 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:36.895 14:43:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:36.895 "params": { 00:06:36.895 "name": "Nvme0", 00:06:36.895 "trtype": "tcp", 00:06:36.895 "traddr": "10.0.0.2", 00:06:36.895 "adrfam": "ipv4", 00:06:36.895 "trsvcid": "4420", 00:06:36.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:36.895 "hdgst": false, 00:06:36.895 "ddgst": false 00:06:36.895 }, 00:06:36.895 "method": "bdev_nvme_attach_controller" 00:06:36.895 }' 00:06:36.895 [2024-12-11 14:43:19.588213] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:06:36.895 [2024-12-11 14:43:19.588302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid569661 ] 00:06:36.895 [2024-12-11 14:43:19.657600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.153 [2024-12-11 14:43:19.718887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.411 Running I/O for 1 seconds... 
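
A quick sanity check on the throughput figures that follow: bdevperf issues 64 KiB I/Os here (-o 65536), so MiB/s is simply IOPS/16; 1664.00 IOPS comes out to 104.00 MiB/s, and the run's average of 1705.70 IOPS to 106.61 MiB/s:

# 64 KiB per I/O => MiB/s = IOPS * 65536 / 1048576 = IOPS / 16
awk 'BEGIN { printf "%.2f %.2f\n", 1664.00 / 16, 1705.70 / 16 }'   # -> 104.00 106.61
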
00:06:38.345 1664.00 IOPS, 104.00 MiB/s 00:06:38.345 Latency(us) 00:06:38.345 [2024-12-11T13:43:21.118Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:38.345 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:38.345 Verification LBA range: start 0x0 length 0x400 00:06:38.345 Nvme0n1 : 1.01 1705.70 106.61 0.00 0.00 36899.77 5509.88 33593.27 00:06:38.345 [2024-12-11T13:43:21.118Z] =================================================================================================================== 00:06:38.345 [2024-12-11T13:43:21.118Z] Total : 1705.70 106.61 0.00 0.00 36899.77 5509.88 33593.27 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.603 rmmod nvme_tcp 00:06:38.603 rmmod nvme_fabrics 00:06:38.603 rmmod nvme_keyring 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 569336 ']' 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 569336 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 569336 ']' 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 569336 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.603 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 569336 00:06:38.861 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:38.861 14:43:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:38.861 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569336' 00:06:38.861 killing process with pid 569336 00:06:38.861 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 569336 00:06:38.861 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 569336 00:06:38.861 [2024-12-11 14:43:21.612906] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:39.121 14:43:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:41.031 00:06:41.031 real 0m8.951s 00:06:41.031 user 0m19.720s 00:06:41.031 sys 0m2.806s 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.031 ************************************ 00:06:41.031 END TEST nvmf_host_management 00:06:41.031 ************************************ 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:41.031 ************************************ 00:06:41.031 START TEST nvmf_lvol 00:06:41.031 ************************************ 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:41.031 * Looking for test storage... 00:06:41.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:41.031 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.290 --rc genhtml_branch_coverage=1 00:06:41.290 --rc genhtml_function_coverage=1 00:06:41.290 --rc genhtml_legend=1 00:06:41.290 --rc geninfo_all_blocks=1 00:06:41.290 --rc geninfo_unexecuted_blocks=1 00:06:41.290 00:06:41.290 ' 00:06:41.290 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:41.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.290 --rc genhtml_branch_coverage=1 00:06:41.291 --rc genhtml_function_coverage=1 00:06:41.291 --rc genhtml_legend=1 00:06:41.291 --rc geninfo_all_blocks=1 00:06:41.291 --rc geninfo_unexecuted_blocks=1 00:06:41.291 00:06:41.291 ' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:41.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.291 --rc genhtml_branch_coverage=1 00:06:41.291 --rc genhtml_function_coverage=1 00:06:41.291 --rc genhtml_legend=1 00:06:41.291 --rc geninfo_all_blocks=1 00:06:41.291 --rc geninfo_unexecuted_blocks=1 00:06:41.291 00:06:41.291 ' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:41.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.291 --rc genhtml_branch_coverage=1 00:06:41.291 --rc genhtml_function_coverage=1 00:06:41.291 --rc genhtml_legend=1 00:06:41.291 --rc geninfo_all_blocks=1 00:06:41.291 --rc geninfo_unexecuted_blocks=1 00:06:41.291 00:06:41.291 ' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:41.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:41.291 14:43:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:43.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:43.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.823 14:43:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.823 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:43.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:43.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:06:43.824 00:06:43.824 --- 10.0.0.2 ping statistics --- 00:06:43.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.824 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:43.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:06:43.824 00:06:43.824 --- 10.0.0.1 ping statistics --- 00:06:43.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.824 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=571876 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 571876 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 571876 ']' 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.824 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.824 [2024-12-11 14:43:26.398617] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:06:43.824 [2024-12-11 14:43:26.398700] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.824 [2024-12-11 14:43:26.489578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.824 [2024-12-11 14:43:26.564381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.824 [2024-12-11 14:43:26.564452] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.824 [2024-12-11 14:43:26.564477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.824 [2024-12-11 14:43:26.564500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.824 [2024-12-11 14:43:26.564520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:43.824 [2024-12-11 14:43:26.566395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.824 [2024-12-11 14:43:26.566455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.824 [2024-12-11 14:43:26.566464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.083 14:43:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:44.340 [2024-12-11 14:43:27.041188] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.340 14:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.599 14:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:44.599 14:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.164 14:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:45.165 14:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:45.165 14:43:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:45.730 14:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d2482a76-1351-434c-b6a7-3ffee7ff2b4c 00:06:45.730 14:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2482a76-1351-434c-b6a7-3ffee7ff2b4c lvol 20 00:06:45.730 14:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7d0e67da-ce1e-4326-b278-769ded535ee1 00:06:45.730 14:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.988 14:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7d0e67da-ce1e-4326-b278-769ded535ee1 00:06:46.245 14:43:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:46.503 [2024-12-11 14:43:29.241813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:46.503 14:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.068 14:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=572302 00:06:47.068 14:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:47.068 14:43:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.003 14:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7d0e67da-ce1e-4326-b278-769ded535ee1 MY_SNAPSHOT 00:06:48.261 14:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=dc8b7ac9-6e3f-413a-a067-7455c5b30fa8 00:06:48.261 14:43:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7d0e67da-ce1e-4326-b278-769ded535ee1 30 00:06:48.519 14:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone dc8b7ac9-6e3f-413a-a067-7455c5b30fa8 MY_CLONE 00:06:48.776 14:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=86e37405-1035-4464-8233-b581cdb13110 00:06:48.776 14:43:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 86e37405-1035-4464-8233-b581cdb13110 00:06:49.710 14:43:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 572302 00:06:57.821 Initializing NVMe Controllers 00:06:57.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:57.821 Controller IO queue size 128, less than required. 00:06:57.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:57.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:57.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:57.821 Initialization complete. Launching workers. 00:06:57.821 ======================================================== 00:06:57.821 Latency(us) 00:06:57.821 Device Information : IOPS MiB/s Average min max 00:06:57.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9802.50 38.29 13062.85 1667.88 97143.80 00:06:57.822 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10414.80 40.68 12300.90 2246.52 71674.87 00:06:57.822 ======================================================== 00:06:57.822 Total : 20217.30 78.97 12670.34 1667.88 97143.80 00:06:57.822 00:06:57.822 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:57.822 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7d0e67da-ce1e-4326-b278-769ded535ee1 00:06:57.822 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2482a76-1351-434c-b6a7-3ffee7ff2b4c 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:58.388 rmmod nvme_tcp 00:06:58.388 rmmod nvme_fabrics 00:06:58.388 rmmod nvme_keyring 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 571876 ']' 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 571876 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 571876 ']' 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 571876 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 571876 00:06:58.388 14:43:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 571876' 00:06:58.388 killing process with pid 571876 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 571876 00:06:58.388 14:43:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 571876 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.648 14:43:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.555 00:07:00.555 real 0m19.565s 00:07:00.555 user 1m5.536s 00:07:00.555 sys 0m5.919s 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.555 ************************************ 00:07:00.555 END TEST nvmf_lvol 00:07:00.555 ************************************ 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.555 14:43:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.814 ************************************ 00:07:00.814 START TEST nvmf_lvs_grow 00:07:00.814 ************************************ 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.814 * Looking for test storage... 
00:07:00.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.814 --rc genhtml_branch_coverage=1 00:07:00.814 --rc genhtml_function_coverage=1 00:07:00.814 --rc genhtml_legend=1 00:07:00.814 --rc geninfo_all_blocks=1 00:07:00.814 --rc geninfo_unexecuted_blocks=1 00:07:00.814 00:07:00.814 ' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.814 --rc genhtml_branch_coverage=1 00:07:00.814 --rc genhtml_function_coverage=1 00:07:00.814 --rc genhtml_legend=1 00:07:00.814 --rc geninfo_all_blocks=1 00:07:00.814 --rc geninfo_unexecuted_blocks=1 00:07:00.814 00:07:00.814 ' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.814 --rc genhtml_branch_coverage=1 00:07:00.814 --rc genhtml_function_coverage=1 00:07:00.814 --rc genhtml_legend=1 00:07:00.814 --rc geninfo_all_blocks=1 00:07:00.814 --rc geninfo_unexecuted_blocks=1 00:07:00.814 00:07:00.814 ' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.814 --rc genhtml_branch_coverage=1 00:07:00.814 --rc genhtml_function_coverage=1 00:07:00.814 --rc genhtml_legend=1 00:07:00.814 --rc geninfo_all_blocks=1 00:07:00.814 --rc geninfo_unexecuted_blocks=1 00:07:00.814 00:07:00.814 ' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:00.814 14:43:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.814 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.815 14:43:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:03.347 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:03.348 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:03.348 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.348 14:43:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:03.348 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:03.348 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:03.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:07:03.348 00:07:03.348 --- 10.0.0.2 ping statistics --- 00:07:03.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.348 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:03.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:07:03.348 00:07:03.348 --- 10.0.0.1 ping statistics --- 00:07:03.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.348 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=575608 00:07:03.348 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 575608 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 575608 ']' 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.349 14:43:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 [2024-12-11 14:43:45.898970] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
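The nvmf_tcp_init sequence above splits the NIC pair across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and becomes the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction proves the path before any NVMe traffic flows. Condensed from the commands logged above (run as root; interface and namespace names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1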
00:07:03.349 [2024-12-11 14:43:45.899054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.349 [2024-12-11 14:43:45.971271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.349 [2024-12-11 14:43:46.029597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.349 [2024-12-11 14:43:46.029658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.349 [2024-12-11 14:43:46.029672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.349 [2024-12-11 14:43:46.029683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.349 [2024-12-11 14:43:46.029692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:03.349 [2024-12-11 14:43:46.030286] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.607 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:03.865 [2024-12-11 14:43:46.432617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.865 ************************************ 00:07:03.865 START TEST lvs_grow_clean 00:07:03.865 ************************************ 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:03.865 14:43:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.865 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:04.123 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:04.123 14:43:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:04.382 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc2abf17-3444-4986-88bd-9c1071f84778 00:07:04.382 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:04.382 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:04.640 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:04.640 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:04.640 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc2abf17-3444-4986-88bd-9c1071f84778 lvol 150 00:07:04.899 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=00637b47-ab1e-4c90-ae65-8f0747f76dab 00:07:04.899 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.899 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:05.157 [2024-12-11 14:43:47.818881] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:05.157 [2024-12-11 14:43:47.818961] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:05.157 true 00:07:05.157 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
fc2abf17-3444-4986-88bd-9c1071f84778 00:07:05.157 14:43:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:05.416 14:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:05.416 14:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.675 14:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 00637b47-ab1e-4c90-ae65-8f0747f76dab 00:07:05.933 14:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:06.192 [2024-12-11 14:43:48.898212] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.192 14:43:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.450 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=576048 00:07:06.450 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.450 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 576048 /var/tmp/bdevperf.sock 00:07:06.450 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 576048 ']' 00:07:06.450 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:06.450 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.451 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:06.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:06.451 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:06.451 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.451 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:06.710 [2024-12-11 14:43:49.232986] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
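The setup RPCs above stack the device under test as file -> AIO bdev -> lvstore -> lvol and export the lvol over NVMe/TCP: a 200M backing file with a 4 MiB cluster size yields 49 data clusters, the 150M lvol goes behind nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420, and the file is doubled to 400M and rescanned before I/O starts. A sketch of the sequence, with rpc.py and aio_file standing in for the long scripts/rpc.py and test/nvmf/target/aio_bdev paths, and $lvs/$lvol holding the UUIDs the create calls print (fc2abf17-... and 00637b47-... in this run):

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)  # 49 data clusters
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)                        # 150M lvol
  truncate -s 400M aio_file && rpc.py bdev_aio_rescan aio_bdev              # grow backing file
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420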
00:07:06.710 [2024-12-11 14:43:49.233065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576048 ] 00:07:06.710 [2024-12-11 14:43:49.299189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.710 [2024-12-11 14:43:49.356917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.710 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.710 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:06.710 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:07.276 Nvme0n1 00:07:07.276 14:43:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:07.535 [ 00:07:07.535 { 00:07:07.535 "name": "Nvme0n1", 00:07:07.535 "aliases": [ 00:07:07.535 "00637b47-ab1e-4c90-ae65-8f0747f76dab" 00:07:07.535 ], 00:07:07.535 "product_name": "NVMe disk", 00:07:07.535 "block_size": 4096, 00:07:07.535 "num_blocks": 38912, 00:07:07.535 "uuid": "00637b47-ab1e-4c90-ae65-8f0747f76dab", 00:07:07.535 "numa_id": 0, 00:07:07.535 "assigned_rate_limits": { 00:07:07.535 "rw_ios_per_sec": 0, 00:07:07.535 "rw_mbytes_per_sec": 0, 00:07:07.535 "r_mbytes_per_sec": 0, 00:07:07.535 "w_mbytes_per_sec": 0 00:07:07.535 }, 00:07:07.535 "claimed": false, 00:07:07.535 "zoned": false, 00:07:07.535 "supported_io_types": { 00:07:07.535 "read": true, 00:07:07.535 "write": true, 00:07:07.535 "unmap": true, 00:07:07.535 "flush": true, 00:07:07.535 "reset": true, 00:07:07.535 "nvme_admin": true, 00:07:07.535 "nvme_io": true, 00:07:07.535 "nvme_io_md": false, 00:07:07.535 "write_zeroes": true, 00:07:07.535 "zcopy": false, 00:07:07.535 "get_zone_info": false, 00:07:07.535 "zone_management": false, 00:07:07.535 "zone_append": false, 00:07:07.535 "compare": true, 00:07:07.535 "compare_and_write": true, 00:07:07.535 "abort": true, 00:07:07.535 "seek_hole": false, 00:07:07.535 "seek_data": false, 00:07:07.535 "copy": true, 00:07:07.535 "nvme_iov_md": false 00:07:07.535 }, 00:07:07.535 "memory_domains": [ 00:07:07.535 { 00:07:07.535 "dma_device_id": "system", 00:07:07.536 "dma_device_type": 1 00:07:07.536 } 00:07:07.536 ], 00:07:07.536 "driver_specific": { 00:07:07.536 "nvme": [ 00:07:07.536 { 00:07:07.536 "trid": { 00:07:07.536 "trtype": "TCP", 00:07:07.536 "adrfam": "IPv4", 00:07:07.536 "traddr": "10.0.0.2", 00:07:07.536 "trsvcid": "4420", 00:07:07.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:07.536 }, 00:07:07.536 "ctrlr_data": { 00:07:07.536 "cntlid": 1, 00:07:07.536 "vendor_id": "0x8086", 00:07:07.536 "model_number": "SPDK bdev Controller", 00:07:07.536 "serial_number": "SPDK0", 00:07:07.536 "firmware_revision": "25.01", 00:07:07.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.536 "oacs": { 00:07:07.536 "security": 0, 00:07:07.536 "format": 0, 00:07:07.536 "firmware": 0, 00:07:07.536 "ns_manage": 0 00:07:07.536 }, 00:07:07.536 "multi_ctrlr": true, 00:07:07.536 
"ana_reporting": false 00:07:07.536 }, 00:07:07.536 "vs": { 00:07:07.536 "nvme_version": "1.3" 00:07:07.536 }, 00:07:07.536 "ns_data": { 00:07:07.536 "id": 1, 00:07:07.536 "can_share": true 00:07:07.536 } 00:07:07.536 } 00:07:07.536 ], 00:07:07.536 "mp_policy": "active_passive" 00:07:07.536 } 00:07:07.536 } 00:07:07.536 ] 00:07:07.536 14:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=576175 00:07:07.536 14:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:07.536 14:43:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:07.536 Running I/O for 10 seconds... 00:07:08.471 Latency(us) 00:07:08.471 [2024-12-11T13:43:51.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.471 Nvme0n1 : 1.00 14924.00 58.30 0.00 0.00 0.00 0.00 0.00 00:07:08.471 [2024-12-11T13:43:51.244Z] =================================================================================================================== 00:07:08.471 [2024-12-11T13:43:51.244Z] Total : 14924.00 58.30 0.00 0.00 0.00 0.00 0.00 00:07:08.471 00:07:09.407 14:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:09.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.666 Nvme0n1 : 2.00 15177.50 59.29 0.00 0.00 0.00 0.00 0.00 00:07:09.666 [2024-12-11T13:43:52.439Z] =================================================================================================================== 00:07:09.666 [2024-12-11T13:43:52.439Z] Total : 15177.50 59.29 0.00 0.00 0.00 0.00 0.00 00:07:09.666 00:07:09.666 true 00:07:09.666 14:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:09.666 14:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:09.924 14:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:09.924 14:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:09.924 14:43:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 576175 00:07:10.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.490 Nvme0n1 : 3.00 15304.67 59.78 0.00 0.00 0.00 0.00 0.00 00:07:10.490 [2024-12-11T13:43:53.263Z] =================================================================================================================== 00:07:10.490 [2024-12-11T13:43:53.263Z] Total : 15304.67 59.78 0.00 0.00 0.00 0.00 0.00 00:07:10.490 00:07:11.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.869 Nvme0n1 : 4.00 15383.75 60.09 0.00 0.00 0.00 0.00 0.00 00:07:11.869 [2024-12-11T13:43:54.642Z] 
=================================================================================================================== 00:07:11.869 [2024-12-11T13:43:54.642Z] Total : 15383.75 60.09 0.00 0.00 0.00 0.00 0.00 00:07:11.869 00:07:12.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.805 Nvme0n1 : 5.00 15456.60 60.38 0.00 0.00 0.00 0.00 0.00 00:07:12.805 [2024-12-11T13:43:55.578Z] =================================================================================================================== 00:07:12.805 [2024-12-11T13:43:55.578Z] Total : 15456.60 60.38 0.00 0.00 0.00 0.00 0.00 00:07:12.805 00:07:13.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.740 Nvme0n1 : 6.00 15516.00 60.61 0.00 0.00 0.00 0.00 0.00 00:07:13.740 [2024-12-11T13:43:56.513Z] =================================================================================================================== 00:07:13.740 [2024-12-11T13:43:56.513Z] Total : 15516.00 60.61 0.00 0.00 0.00 0.00 0.00 00:07:13.740 00:07:14.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.676 Nvme0n1 : 7.00 15558.29 60.77 0.00 0.00 0.00 0.00 0.00 00:07:14.676 [2024-12-11T13:43:57.449Z] =================================================================================================================== 00:07:14.676 [2024-12-11T13:43:57.449Z] Total : 15558.29 60.77 0.00 0.00 0.00 0.00 0.00 00:07:14.676 00:07:15.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.614 Nvme0n1 : 8.00 15598.00 60.93 0.00 0.00 0.00 0.00 0.00 00:07:15.614 [2024-12-11T13:43:58.387Z] =================================================================================================================== 00:07:15.614 [2024-12-11T13:43:58.387Z] Total : 15598.00 60.93 0.00 0.00 0.00 0.00 0.00 00:07:15.614 00:07:16.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.550 Nvme0n1 : 9.00 15628.78 61.05 0.00 0.00 0.00 0.00 0.00 00:07:16.550 [2024-12-11T13:43:59.323Z] =================================================================================================================== 00:07:16.550 [2024-12-11T13:43:59.323Z] Total : 15628.78 61.05 0.00 0.00 0.00 0.00 0.00 00:07:16.550 00:07:17.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.485 Nvme0n1 : 10.00 15640.70 61.10 0.00 0.00 0.00 0.00 0.00 00:07:17.485 [2024-12-11T13:44:00.258Z] =================================================================================================================== 00:07:17.485 [2024-12-11T13:44:00.258Z] Total : 15640.70 61.10 0.00 0.00 0.00 0.00 0.00 00:07:17.485 00:07:17.485 00:07:17.485 Latency(us) 00:07:17.486 [2024-12-11T13:44:00.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.486 Nvme0n1 : 10.01 15639.26 61.09 0.00 0.00 8178.51 4271.98 16311.18 00:07:17.486 [2024-12-11T13:44:00.259Z] =================================================================================================================== 00:07:17.486 [2024-12-11T13:44:00.259Z] Total : 15639.26 61.09 0.00 0.00 8178.51 4271.98 16311.18 00:07:17.486 { 00:07:17.486 "results": [ 00:07:17.486 { 00:07:17.486 "job": "Nvme0n1", 00:07:17.486 "core_mask": "0x2", 00:07:17.486 "workload": "randwrite", 00:07:17.486 "status": "finished", 00:07:17.486 "queue_depth": 128, 00:07:17.486 "io_size": 4096, 00:07:17.486 
"runtime": 10.005075, 00:07:17.486 "iops": 15639.26307398995, 00:07:17.486 "mibps": 61.090871382773244, 00:07:17.486 "io_failed": 0, 00:07:17.486 "io_timeout": 0, 00:07:17.486 "avg_latency_us": 8178.507530453916, 00:07:17.486 "min_latency_us": 4271.976296296296, 00:07:17.486 "max_latency_us": 16311.182222222222 00:07:17.486 } 00:07:17.486 ], 00:07:17.486 "core_count": 1 00:07:17.486 } 00:07:17.486 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 576048 00:07:17.486 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 576048 ']' 00:07:17.486 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 576048 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 576048 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 576048' 00:07:17.744 killing process with pid 576048 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 576048 00:07:17.744 Received shutdown signal, test time was about 10.000000 seconds 00:07:17.744 00:07:17.744 Latency(us) 00:07:17.744 [2024-12-11T13:44:00.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.744 [2024-12-11T13:44:00.517Z] =================================================================================================================== 00:07:17.744 [2024-12-11T13:44:00.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:17.744 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 576048 00:07:18.002 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:18.260 14:44:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:18.517 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:18.517 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:18.776 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:18.776 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:18.776 14:44:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:19.034 [2024-12-11 14:44:01.593411] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:19.034 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:19.292 request: 00:07:19.292 { 00:07:19.292 "uuid": "fc2abf17-3444-4986-88bd-9c1071f84778", 00:07:19.292 "method": "bdev_lvol_get_lvstores", 00:07:19.292 "req_id": 1 00:07:19.292 } 00:07:19.292 Got JSON-RPC error response 00:07:19.292 response: 00:07:19.292 { 00:07:19.292 "code": -19, 00:07:19.292 "message": "No such device" 00:07:19.292 } 00:07:19.292 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:19.293 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:19.293 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:19.293 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:19.293 14:44:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.551 aio_bdev 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 00637b47-ab1e-4c90-ae65-8f0747f76dab 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=00637b47-ab1e-4c90-ae65-8f0747f76dab 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.551 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:19.809 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 00637b47-ab1e-4c90-ae65-8f0747f76dab -t 2000 00:07:20.067 [ 00:07:20.067 { 00:07:20.067 "name": "00637b47-ab1e-4c90-ae65-8f0747f76dab", 00:07:20.067 "aliases": [ 00:07:20.067 "lvs/lvol" 00:07:20.067 ], 00:07:20.067 "product_name": "Logical Volume", 00:07:20.067 "block_size": 4096, 00:07:20.067 "num_blocks": 38912, 00:07:20.068 "uuid": "00637b47-ab1e-4c90-ae65-8f0747f76dab", 00:07:20.068 "assigned_rate_limits": { 00:07:20.068 "rw_ios_per_sec": 0, 00:07:20.068 "rw_mbytes_per_sec": 0, 00:07:20.068 "r_mbytes_per_sec": 0, 00:07:20.068 "w_mbytes_per_sec": 0 00:07:20.068 }, 00:07:20.068 "claimed": false, 00:07:20.068 "zoned": false, 00:07:20.068 "supported_io_types": { 00:07:20.068 "read": true, 00:07:20.068 "write": true, 00:07:20.068 "unmap": true, 00:07:20.068 "flush": false, 00:07:20.068 "reset": true, 00:07:20.068 "nvme_admin": false, 00:07:20.068 "nvme_io": false, 00:07:20.068 "nvme_io_md": false, 00:07:20.068 "write_zeroes": true, 00:07:20.068 "zcopy": false, 00:07:20.068 "get_zone_info": false, 00:07:20.068 "zone_management": false, 00:07:20.068 "zone_append": false, 00:07:20.068 "compare": false, 00:07:20.068 "compare_and_write": false, 00:07:20.068 "abort": false, 00:07:20.068 "seek_hole": true, 00:07:20.068 "seek_data": true, 00:07:20.068 "copy": false, 00:07:20.068 "nvme_iov_md": false 00:07:20.068 }, 00:07:20.068 "driver_specific": { 00:07:20.068 "lvol": { 00:07:20.068 "lvol_store_uuid": "fc2abf17-3444-4986-88bd-9c1071f84778", 00:07:20.068 "base_bdev": "aio_bdev", 00:07:20.068 "thin_provision": false, 00:07:20.068 "num_allocated_clusters": 38, 00:07:20.068 "snapshot": false, 00:07:20.068 "clone": false, 00:07:20.068 "esnap_clone": false 00:07:20.068 } 00:07:20.068 } 00:07:20.068 } 00:07:20.068 ] 00:07:20.068 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:20.068 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:20.068 
14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:20.326 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:20.326 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:20.326 14:44:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:20.585 14:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:20.585 14:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 00637b47-ab1e-4c90-ae65-8f0747f76dab 00:07:20.843 14:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc2abf17-3444-4986-88bd-9c1071f84778 00:07:21.101 14:44:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.360 00:07:21.360 real 0m17.598s 00:07:21.360 user 0m17.209s 00:07:21.360 sys 0m1.784s 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:21.360 ************************************ 00:07:21.360 END TEST lvs_grow_clean 00:07:21.360 ************************************ 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:21.360 ************************************ 00:07:21.360 START TEST lvs_grow_dirty 00:07:21.360 ************************************ 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.360 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.927 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.927 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:21.927 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fde15098-3f71-46d7-a13d-38a8f884ea2e 00:07:21.927 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e 00:07:21.927 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:22.185 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:22.185 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:22.185 14:44:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fde15098-3f71-46d7-a13d-38a8f884ea2e lvol 150 00:07:22.752 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=203ffd67-ce3b-45c0-b4ed-1c88166bcda6 00:07:22.752 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:22.753 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.753 [2024-12-11 14:44:05.473895] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:22.753 [2024-12-11 14:44:05.473979] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.753 true 00:07:22.753 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.753 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e 00:07:23.012 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:23.012 14:44:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.602 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 203ffd67-ce3b-45c0-b4ed-1c88166bcda6 00:07:23.602 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.915 [2024-12-11 14:44:06.573255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.915 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=578239 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 578239 /var/tmp/bdevperf.sock 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 578239 ']' 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.205 14:44:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:24.205 [2024-12-11 14:44:06.904028] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
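Both subtests drive I/O identically: bdevperf starts idle with -z on its own RPC socket, a controller for the exported subsystem is attached over NVMe/TCP, and perform_tests launches the 10-second 4 KiB randwrite run whose per-second table follows. The flags below are copied from the log, with the SPDK tree path shortened:

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests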
00:07:24.205 [2024-12-11 14:44:06.904102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578239 ] 00:07:24.489 [2024-12-11 14:44:06.972934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.489 [2024-12-11 14:44:07.029632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.489 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.489 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:24.489 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.747 Nvme0n1 00:07:24.747 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.005 [ 00:07:25.005 { 00:07:25.005 "name": "Nvme0n1", 00:07:25.005 "aliases": [ 00:07:25.005 "203ffd67-ce3b-45c0-b4ed-1c88166bcda6" 00:07:25.005 ], 00:07:25.005 "product_name": "NVMe disk", 00:07:25.005 "block_size": 4096, 00:07:25.005 "num_blocks": 38912, 00:07:25.005 "uuid": "203ffd67-ce3b-45c0-b4ed-1c88166bcda6", 00:07:25.005 "numa_id": 0, 00:07:25.005 "assigned_rate_limits": { 00:07:25.005 "rw_ios_per_sec": 0, 00:07:25.005 "rw_mbytes_per_sec": 0, 00:07:25.005 "r_mbytes_per_sec": 0, 00:07:25.005 "w_mbytes_per_sec": 0 00:07:25.005 }, 00:07:25.005 "claimed": false, 00:07:25.005 "zoned": false, 00:07:25.005 "supported_io_types": { 00:07:25.005 "read": true, 00:07:25.005 "write": true, 00:07:25.005 "unmap": true, 00:07:25.005 "flush": true, 00:07:25.005 "reset": true, 00:07:25.005 "nvme_admin": true, 00:07:25.005 "nvme_io": true, 00:07:25.005 "nvme_io_md": false, 00:07:25.005 "write_zeroes": true, 00:07:25.005 "zcopy": false, 00:07:25.005 "get_zone_info": false, 00:07:25.005 "zone_management": false, 00:07:25.005 "zone_append": false, 00:07:25.005 "compare": true, 00:07:25.005 "compare_and_write": true, 00:07:25.005 "abort": true, 00:07:25.005 "seek_hole": false, 00:07:25.005 "seek_data": false, 00:07:25.005 "copy": true, 00:07:25.005 "nvme_iov_md": false 00:07:25.005 }, 00:07:25.005 "memory_domains": [ 00:07:25.005 { 00:07:25.005 "dma_device_id": "system", 00:07:25.005 "dma_device_type": 1 00:07:25.005 } 00:07:25.005 ], 00:07:25.005 "driver_specific": { 00:07:25.005 "nvme": [ 00:07:25.005 { 00:07:25.005 "trid": { 00:07:25.005 "trtype": "TCP", 00:07:25.005 "adrfam": "IPv4", 00:07:25.005 "traddr": "10.0.0.2", 00:07:25.005 "trsvcid": "4420", 00:07:25.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.005 }, 00:07:25.005 "ctrlr_data": { 00:07:25.005 "cntlid": 1, 00:07:25.005 "vendor_id": "0x8086", 00:07:25.005 "model_number": "SPDK bdev Controller", 00:07:25.005 "serial_number": "SPDK0", 00:07:25.005 "firmware_revision": "25.01", 00:07:25.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.005 "oacs": { 00:07:25.005 "security": 0, 00:07:25.005 "format": 0, 00:07:25.005 "firmware": 0, 00:07:25.005 "ns_manage": 0 00:07:25.005 }, 00:07:25.005 "multi_ctrlr": true, 00:07:25.005 
"ana_reporting": false 00:07:25.005 }, 00:07:25.005 "vs": { 00:07:25.005 "nvme_version": "1.3" 00:07:25.005 }, 00:07:25.005 "ns_data": { 00:07:25.005 "id": 1, 00:07:25.005 "can_share": true 00:07:25.005 } 00:07:25.005 } 00:07:25.005 ], 00:07:25.005 "mp_policy": "active_passive" 00:07:25.005 } 00:07:25.005 } 00:07:25.005 ] 00:07:25.005 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=578259 00:07:25.005 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:25.005 14:44:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.264 Running I/O for 10 seconds... 00:07:26.198 Latency(us) 00:07:26.198 [2024-12-11T13:44:08.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.198 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:07:26.198 [2024-12-11T13:44:08.971Z] =================================================================================================================== 00:07:26.198 [2024-12-11T13:44:08.971Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:07:26.198 00:07:27.132 14:44:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fde15098-3f71-46d7-a13d-38a8f884ea2e 00:07:27.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.132 Nvme0n1 : 2.00 15212.50 59.42 0.00 0.00 0.00 0.00 0.00 00:07:27.132 [2024-12-11T13:44:09.905Z] =================================================================================================================== 00:07:27.132 [2024-12-11T13:44:09.905Z] Total : 15212.50 59.42 0.00 0.00 0.00 0.00 0.00 00:07:27.132 00:07:27.390 true 00:07:27.390 14:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e 00:07:27.390 14:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:27.649 14:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.649 14:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.649 14:44:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 578259 00:07:28.215 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.215 Nvme0n1 : 3.00 15221.67 59.46 0.00 0.00 0.00 0.00 0.00 00:07:28.215 [2024-12-11T13:44:10.988Z] =================================================================================================================== 00:07:28.215 [2024-12-11T13:44:10.988Z] Total : 15221.67 59.46 0.00 0.00 0.00 0.00 0.00 00:07:28.215 00:07:29.149 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.149 Nvme0n1 : 4.00 15321.50 59.85 0.00 0.00 0.00 0.00 0.00 00:07:29.149 [2024-12-11T13:44:11.922Z] 
=================================================================================================================== 00:07:29.149 [2024-12-11T13:44:11.922Z] Total : 15321.50 59.85 0.00 0.00 0.00 0.00 0.00 00:07:29.149 00:07:30.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.523 Nvme0n1 : 5.00 15406.80 60.18 0.00 0.00 0.00 0.00 0.00 00:07:30.523 [2024-12-11T13:44:13.296Z] =================================================================================================================== 00:07:30.523 [2024-12-11T13:44:13.296Z] Total : 15406.80 60.18 0.00 0.00 0.00 0.00 0.00 00:07:30.523 00:07:31.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.457 Nvme0n1 : 6.00 15463.67 60.40 0.00 0.00 0.00 0.00 0.00 00:07:31.457 [2024-12-11T13:44:14.230Z] =================================================================================================================== 00:07:31.457 [2024-12-11T13:44:14.230Z] Total : 15463.67 60.40 0.00 0.00 0.00 0.00 0.00 00:07:31.457 00:07:32.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.395 Nvme0n1 : 7.00 15504.29 60.56 0.00 0.00 0.00 0.00 0.00 00:07:32.395 [2024-12-11T13:44:15.168Z] =================================================================================================================== 00:07:32.395 [2024-12-11T13:44:15.168Z] Total : 15504.29 60.56 0.00 0.00 0.00 0.00 0.00 00:07:32.395 00:07:33.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.330 Nvme0n1 : 8.00 15558.75 60.78 0.00 0.00 0.00 0.00 0.00 00:07:33.330 [2024-12-11T13:44:16.103Z] =================================================================================================================== 00:07:33.330 [2024-12-11T13:44:16.103Z] Total : 15558.75 60.78 0.00 0.00 0.00 0.00 0.00 00:07:33.330 00:07:34.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.261 Nvme0n1 : 9.00 15594.56 60.92 0.00 0.00 0.00 0.00 0.00 00:07:34.261 [2024-12-11T13:44:17.034Z] =================================================================================================================== 00:07:34.261 [2024-12-11T13:44:17.034Z] Total : 15594.56 60.92 0.00 0.00 0.00 0.00 0.00 00:07:34.261 00:07:35.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.195 Nvme0n1 : 10.00 15622.90 61.03 0.00 0.00 0.00 0.00 0.00 00:07:35.195 [2024-12-11T13:44:17.968Z] =================================================================================================================== 00:07:35.195 [2024-12-11T13:44:17.968Z] Total : 15622.90 61.03 0.00 0.00 0.00 0.00 0.00 00:07:35.195 00:07:35.195 00:07:35.195 Latency(us) 00:07:35.195 [2024-12-11T13:44:17.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.195 Nvme0n1 : 10.01 15627.85 61.05 0.00 0.00 8185.70 5121.52 20680.25 00:07:35.195 [2024-12-11T13:44:17.968Z] =================================================================================================================== 00:07:35.195 [2024-12-11T13:44:17.968Z] Total : 15627.85 61.05 0.00 0.00 8185.70 5121.52 20680.25 00:07:35.195 { 00:07:35.195 "results": [ 00:07:35.195 { 00:07:35.195 "job": "Nvme0n1", 00:07:35.195 "core_mask": "0x2", 00:07:35.195 "workload": "randwrite", 00:07:35.195 "status": "finished", 00:07:35.195 "queue_depth": 128, 00:07:35.195 "io_size": 4096, 00:07:35.195 
"runtime": 10.005024, 00:07:35.195 "iops": 15627.848568878995, 00:07:35.195 "mibps": 61.046283472183575, 00:07:35.195 "io_failed": 0, 00:07:35.195 "io_timeout": 0, 00:07:35.195 "avg_latency_us": 8185.704233715862, 00:07:35.195 "min_latency_us": 5121.517037037037, 00:07:35.195 "max_latency_us": 20680.248888888887 00:07:35.195 } 00:07:35.195 ], 00:07:35.195 "core_count": 1 00:07:35.195 } 00:07:35.195 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 578239 00:07:35.195 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 578239 ']' 00:07:35.195 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 578239 00:07:35.195 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:35.195 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.195 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578239 00:07:35.454 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:35.454 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:35.454 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578239' 00:07:35.454 killing process with pid 578239 00:07:35.454 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 578239 00:07:35.454 Received shutdown signal, test time was about 10.000000 seconds 00:07:35.454 00:07:35.454 Latency(us) 00:07:35.454 [2024-12-11T13:44:18.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.454 [2024-12-11T13:44:18.227Z] =================================================================================================================== 00:07:35.454 [2024-12-11T13:44:18.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:35.454 14:44:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 578239 00:07:35.454 14:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.712 14:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.970 14:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e 00:07:35.970 14:44:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:36.536 14:44:19 
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 575608
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 575608
00:07:36.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 575608 Killed "${NVMF_APP[@]}" "$@"
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=579601
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 579601
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 579601 ']'
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:36.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.536 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:36.536 [2024-12-11 14:44:19.095512] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:07:36.536 [2024-12-11 14:44:19.095627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:36.536 [2024-12-11 14:44:19.169378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.536 [2024-12-11 14:44:19.227675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:36.536 [2024-12-11 14:44:19.227735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:36.536 [2024-12-11 14:44:19.227749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:36.536 [2024-12-11 14:44:19.227760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:36.536 [2024-12-11 14:44:19.227769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:36.536 [2024-12-11 14:44:19.228341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:36.794 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:37.053 [2024-12-11 14:44:19.607503] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:07:37.053 [2024-12-11 14:44:19.607639] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:07:37.053 [2024-12-11 14:44:19.607692] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 203ffd67-ce3b-45c0-b4ed-1c88166bcda6
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=203ffd67-ce3b-45c0-b4ed-1c88166bcda6
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:37.053 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:37.311 14:44:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 203ffd67-ce3b-45c0-b4ed-1c88166bcda6 -t 2000
00:07:37.568 [
00:07:37.568 {
00:07:37.568 "name": "203ffd67-ce3b-45c0-b4ed-1c88166bcda6",
00:07:37.568 "aliases": [
00:07:37.568 "lvs/lvol"
00:07:37.568 ],
00:07:37.568 "product_name": "Logical Volume",
00:07:37.568 "block_size": 4096,
00:07:37.568 "num_blocks": 38912,
00:07:37.568 "uuid": "203ffd67-ce3b-45c0-b4ed-1c88166bcda6",
00:07:37.568 "assigned_rate_limits": {
00:07:37.568 "rw_ios_per_sec": 0,
00:07:37.568 "rw_mbytes_per_sec": 0,
00:07:37.568 "r_mbytes_per_sec": 0,
00:07:37.568 "w_mbytes_per_sec": 0
00:07:37.568 },
00:07:37.568 "claimed": false,
00:07:37.568 "zoned": false,
00:07:37.568 "supported_io_types": {
00:07:37.568 "read": true,
00:07:37.568 "write": true,
00:07:37.568 "unmap": true,
00:07:37.568 "flush": false,
00:07:37.568 "reset": true,
00:07:37.568 "nvme_admin": false,
00:07:37.568 "nvme_io": false,
00:07:37.568 "nvme_io_md": false,
00:07:37.568 "write_zeroes": true,
00:07:37.568 "zcopy": false,
00:07:37.568 "get_zone_info": false,
00:07:37.568 "zone_management": false,
00:07:37.568 "zone_append": false,
00:07:37.568 "compare": false,
00:07:37.568 "compare_and_write": false,
00:07:37.568 "abort": false,
00:07:37.568 "seek_hole": true,
00:07:37.568 "seek_data": true,
00:07:37.568 "copy": false,
00:07:37.568 "nvme_iov_md": false
00:07:37.568 },
00:07:37.568 "driver_specific": {
00:07:37.568 "lvol": {
00:07:37.568 "lvol_store_uuid": "fde15098-3f71-46d7-a13d-38a8f884ea2e",
00:07:37.568 "base_bdev": "aio_bdev",
00:07:37.569 "thin_provision": false,
00:07:37.569 "num_allocated_clusters": 38,
00:07:37.569 "snapshot": false,
00:07:37.569 "clone": false,
00:07:37.569 "esnap_clone": false
00:07:37.569 }
00:07:37.569 }
00:07:37.569 }
00:07:37.569 ]
00:07:37.569 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:07:37.569 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:37.569 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:07:37.827 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:07:37.827 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:37.827 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:07:38.085 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:07:38.085 14:44:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:38.343 [2024-12-11 14:44:21.013470] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:07:38.343 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:38.601 request:
00:07:38.601 {
00:07:38.601 "uuid": "fde15098-3f71-46d7-a13d-38a8f884ea2e",
00:07:38.601 "method": "bdev_lvol_get_lvstores",
00:07:38.601 "req_id": 1
00:07:38.601 }
00:07:38.601 Got JSON-RPC error response
00:07:38.601 response:
00:07:38.601 {
00:07:38.601 "code": -19,
00:07:38.601 "message": "No such device"
00:07:38.601 }
00:07:38.601 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:07:38.601 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:38.601 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:38.601 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
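The NOT wrapper traced above is autotest_common's negation helper: the step passes only when the wrapped RPC fails, which is exactly what the -19 "No such device" response asserts now that the lvstore's base bdev has been deleted. In plain bash, the same expectation is roughly (a sketch, reusing the $rpc and $uuid names from the earlier sketch):

  # After bdev_aio_delete, the lvstore must no longer be enumerable.
  if "$rpc" bdev_lvol_get_lvstores -u "$uuid"; then
    echo "lvstore unexpectedly still present" >&2
    exit 1
  fi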
00:07:38.601 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:38.860 aio_bdev
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 203ffd67-ce3b-45c0-b4ed-1c88166bcda6
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=203ffd67-ce3b-45c0-b4ed-1c88166bcda6
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:38.860 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:07:39.117 14:44:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 203ffd67-ce3b-45c0-b4ed-1c88166bcda6 -t 2000
00:07:39.376 [
00:07:39.376 {
00:07:39.376 "name": "203ffd67-ce3b-45c0-b4ed-1c88166bcda6",
00:07:39.376 "aliases": [
00:07:39.376 "lvs/lvol"
00:07:39.376 ],
00:07:39.376 "product_name": "Logical Volume",
00:07:39.376 "block_size": 4096,
00:07:39.376 "num_blocks": 38912,
00:07:39.376 "uuid": "203ffd67-ce3b-45c0-b4ed-1c88166bcda6",
00:07:39.376 "assigned_rate_limits": {
00:07:39.376 "rw_ios_per_sec": 0,
00:07:39.376 "rw_mbytes_per_sec": 0,
00:07:39.376 "r_mbytes_per_sec": 0,
00:07:39.376 "w_mbytes_per_sec": 0
00:07:39.376 },
00:07:39.376 "claimed": false,
00:07:39.376 "zoned": false,
00:07:39.376 "supported_io_types": {
00:07:39.376 "read": true,
00:07:39.376 "write": true,
00:07:39.376 "unmap": true,
00:07:39.376 "flush": false,
00:07:39.376 "reset": true,
00:07:39.376 "nvme_admin": false,
00:07:39.376 "nvme_io": false,
00:07:39.376 "nvme_io_md": false,
00:07:39.376 "write_zeroes": true,
00:07:39.376 "zcopy": false,
00:07:39.376 "get_zone_info": false,
00:07:39.376 "zone_management": false,
00:07:39.376 "zone_append": false,
00:07:39.376 "compare": false,
00:07:39.376 "compare_and_write": false,
00:07:39.376 "abort": false,
00:07:39.376 "seek_hole": true,
00:07:39.376 "seek_data": true,
00:07:39.376 "copy": false,
00:07:39.376 "nvme_iov_md": false
00:07:39.376 },
00:07:39.376 "driver_specific": {
00:07:39.376 "lvol": {
00:07:39.376 "lvol_store_uuid": "fde15098-3f71-46d7-a13d-38a8f884ea2e",
00:07:39.376 "base_bdev": "aio_bdev",
00:07:39.376 "thin_provision": false,
00:07:39.376 "num_allocated_clusters": 38,
00:07:39.376 "snapshot": false,
00:07:39.376 "clone": false,
00:07:39.376 "esnap_clone": false
00:07:39.376 }
00:07:39.376 }
00:07:39.376 }
00:07:39.376 ]
00:07:39.376 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:07:39.376 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:39.376 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:07:39.942 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:07:39.942 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:39.942 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:07:39.942 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:07:39.942 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 203ffd67-ce3b-45c0-b4ed-1c88166bcda6
00:07:40.200 14:44:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fde15098-3f71-46d7-a13d-38a8f884ea2e
00:07:40.766 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:07:40.766 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:07:40.766
00:07:40.766 real 0m19.402s
00:07:40.766 user 0m49.248s
00:07:40.766 sys 0m4.371s
00:07:40.766 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.766 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:07:41.025 ************************************
00:07:41.025 END TEST lvs_grow_dirty
00:07:41.025 ************************************
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:07:41.025 nvmf_trace.0
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:41.025 rmmod nvme_tcp
00:07:41.025 rmmod nvme_fabrics
00:07:41.025 rmmod nvme_keyring
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
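The process_shm trap above preserves the tracepoint buffer the target advertised at startup (/dev/shm/nvmf_trace.0), so a failed run can still be replayed offline with spdk_trace. The capture reduces to a single command (sketch; $output_dir stands in for the autotest output directory used in the tar call above):

  # Keep the shm trace buffer for offline analysis with spdk_trace.
  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0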
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 579601 ']'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 579601
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 579601 ']'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 579601
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 579601
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 579601'
00:07:41.025 killing process with pid 579601
00:07:41.025 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 579601
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 579601
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:41.283 14:44:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:43.190 14:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:43.190
00:07:43.190 real 0m42.600s
00:07:43.190 user 1m12.577s
00:07:43.190 sys 0m8.167s
00:07:43.190 14:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:43.190 14:44:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:43.190 ************************************
00:07:43.190 END TEST nvmf_lvs_grow
00:07:43.190 ************************************
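Each test in this log is launched through the run_test wrapper, which prints the START/END banners and the real/user/sys timing seen above. The call that kicks off the next test is, in effect, the following (a sketch of the invocation only, with $rootdir standing in for the spdk checkout path; the wrapper itself lives in autotest_common.sh):

  run_test nvmf_bdev_io_wait "$rootdir/test/nvmf/target/bdev_io_wait.sh" --transport=tcp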
00:07:43.449 14:44:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:07:43.449 14:44:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:43.449 14:44:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:43.449 14:44:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:43.449 ************************************
00:07:43.449 START TEST nvmf_bdev_io_wait
00:07:43.449 ************************************
00:07:43.449 14:44:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:07:43.449 * Looking for test storage...
00:07:43.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:43.449 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:43.449 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:07:43.449 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:43.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.450 --rc genhtml_branch_coverage=1
00:07:43.450 --rc genhtml_function_coverage=1
00:07:43.450 --rc genhtml_legend=1
00:07:43.450 --rc geninfo_all_blocks=1
00:07:43.450 --rc geninfo_unexecuted_blocks=1
00:07:43.450
00:07:43.450 '
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:43.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.450 --rc genhtml_branch_coverage=1
00:07:43.450 --rc genhtml_function_coverage=1
00:07:43.450 --rc genhtml_legend=1
00:07:43.450 --rc geninfo_all_blocks=1
00:07:43.450 --rc geninfo_unexecuted_blocks=1
00:07:43.450
00:07:43.450 '
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:07:43.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.450 --rc genhtml_branch_coverage=1
00:07:43.450 --rc genhtml_function_coverage=1
00:07:43.450 --rc genhtml_legend=1
00:07:43.450 --rc geninfo_all_blocks=1
00:07:43.450 --rc geninfo_unexecuted_blocks=1
00:07:43.450
00:07:43.450 '
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:07:43.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:43.450 --rc genhtml_branch_coverage=1
00:07:43.450 --rc genhtml_function_coverage=1
00:07:43.450 --rc genhtml_legend=1
00:07:43.450 --rc geninfo_all_blocks=1
00:07:43.450 --rc geninfo_unexecuted_blocks=1
00:07:43.450
00:07:43.450 '
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:43.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:07:43.450 14:44:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:45.988 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:45.988 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:45.988 Found net devices under 0000:0a:00.0: cvl_0_0
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:45.988 Found net devices under 0000:0a:00.1: cvl_0_1
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
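nvmf_tcp_init, traced next, keeps target and initiator from short-circuiting over loopback by moving one e810 port (cvl_0_0) into a private network namespace and leaving the other (cvl_0_1) in the root namespace. Condensed from the commands that follow, the wiring is roughly this sketch:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip link set cvl_0_1 up
  # Prove reachability in both directions before any NVMe-oF traffic:
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1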
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:45.988 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:45.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:45.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms
00:07:45.989
00:07:45.989 --- 10.0.0.2 ping statistics ---
00:07:45.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:45.989 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:45.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:45.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms
00:07:45.989
00:07:45.989 --- 10.0.0.1 ping statistics ---
00:07:45.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:45.989 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=582253
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 582253
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 582253 ']'
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:45.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:45.989 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:45.989 [2024-12-11 14:44:28.577701] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:07:45.989 [2024-12-11 14:44:28.577780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.989 [2024-12-11 14:44:28.651014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.989 [2024-12-11 14:44:28.712687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.989 [2024-12-11 14:44:28.712748] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:45.989 [2024-12-11 14:44:28.712766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.989 [2024-12-11 14:44:28.712778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.989 [2024-12-11 14:44:28.712787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.989 [2024-12-11 14:44:28.714460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.989 [2024-12-11 14:44:28.714580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.989 [2024-12-11 14:44:28.714648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.989 [2024-12-11 14:44:28.714652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:46.248 [2024-12-11 14:44:28.920211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 Malloc0 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:46.248 [2024-12-11 14:44:28.973600] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=582284 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=582286 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=582288 00:07:46.248 14:44:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.248 { 00:07:46.248 "params": { 00:07:46.248 "name": "Nvme$subsystem", 00:07:46.248 "trtype": "$TEST_TRANSPORT", 00:07:46.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.248 "adrfam": "ipv4", 00:07:46.248 "trsvcid": "$NVMF_PORT", 00:07:46.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.248 "hdgst": ${hdgst:-false}, 00:07:46.248 "ddgst": ${ddgst:-false} 00:07:46.248 }, 00:07:46.248 "method": "bdev_nvme_attach_controller" 00:07:46.248 } 00:07:46.248 EOF 00:07:46.248 )") 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=582290 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:46.248 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.248 { 00:07:46.248 "params": { 00:07:46.248 "name": "Nvme$subsystem", 00:07:46.249 "trtype": "$TEST_TRANSPORT", 00:07:46.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "$NVMF_PORT", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.249 "hdgst": ${hdgst:-false}, 00:07:46.249 "ddgst": ${ddgst:-false} 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 } 00:07:46.249 EOF 00:07:46.249 )") 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.249 { 00:07:46.249 
"params": { 00:07:46.249 "name": "Nvme$subsystem", 00:07:46.249 "trtype": "$TEST_TRANSPORT", 00:07:46.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "$NVMF_PORT", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.249 "hdgst": ${hdgst:-false}, 00:07:46.249 "ddgst": ${ddgst:-false} 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 } 00:07:46.249 EOF 00:07:46.249 )") 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:46.249 { 00:07:46.249 "params": { 00:07:46.249 "name": "Nvme$subsystem", 00:07:46.249 "trtype": "$TEST_TRANSPORT", 00:07:46.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "$NVMF_PORT", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:46.249 "hdgst": ${hdgst:-false}, 00:07:46.249 "ddgst": ${ddgst:-false} 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 } 00:07:46.249 EOF 00:07:46.249 )") 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 582284 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.249 "params": { 00:07:46.249 "name": "Nvme1", 00:07:46.249 "trtype": "tcp", 00:07:46.249 "traddr": "10.0.0.2", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "4420", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.249 "hdgst": false, 00:07:46.249 "ddgst": false 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 }' 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.249 "params": { 00:07:46.249 "name": "Nvme1", 00:07:46.249 "trtype": "tcp", 00:07:46.249 "traddr": "10.0.0.2", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "4420", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.249 "hdgst": false, 00:07:46.249 "ddgst": false 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 }' 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.249 "params": { 00:07:46.249 "name": "Nvme1", 00:07:46.249 "trtype": "tcp", 00:07:46.249 "traddr": "10.0.0.2", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "4420", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.249 "hdgst": false, 00:07:46.249 "ddgst": false 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 }' 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:46.249 14:44:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:46.249 "params": { 00:07:46.249 "name": "Nvme1", 00:07:46.249 "trtype": "tcp", 00:07:46.249 "traddr": "10.0.0.2", 00:07:46.249 "adrfam": "ipv4", 00:07:46.249 "trsvcid": "4420", 00:07:46.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:46.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:46.249 "hdgst": false, 00:07:46.249 "ddgst": false 00:07:46.249 }, 00:07:46.249 "method": "bdev_nvme_attach_controller" 00:07:46.249 }' 00:07:46.507 [2024-12-11 14:44:29.025316] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:07:46.507 [2024-12-11 14:44:29.025316] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:07:46.507 [2024-12-11 14:44:29.025317] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
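The --json /dev/fd/63 in the bdevperf command lines above is bash process substitution: gen_nvmf_target_json (nvmf/common.sh) expands each heredoc into the bdev_nvme_attach_controller entry printed just above and hands the resulting config to bdevperf over an anonymous pipe. A sketch of one of the four invocations, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is defined:

  # -m core mask, -i shared-memory ID, -q queue depth, -o IO size in bytes,
  # -w workload type, -t run time in seconds, -s memory size in MB
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256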
00:07:46.507 [2024-12-11 14:44:29.025418] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:46.507 [2024-12-11 14:44:29.025418] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:46.507 [2024-12-11 14:44:29.025419] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:46.507 [2024-12-11 14:44:29.025584] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:07:46.507 [2024-12-11 14:44:29.025655] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:07:46.507 [2024-12-11 14:44:29.210105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.766 [2024-12-11 14:44:29.263961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:07:46.766 [2024-12-11 14:44:29.311689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.766 [2024-12-11 14:44:29.366043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:07:46.766 [2024-12-11 14:44:29.412096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.766 [2024-12-11 14:44:29.466515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:07:46.766 [2024-12-11 14:44:29.482948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.766 [2024-12-11 14:44:29.534182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:07:47.024 Running I/O for 1 seconds...
00:07:47.024 Running I/O for 1 seconds...
00:07:47.024 Running I/O for 1 seconds...
00:07:47.282 Running I/O for 1 seconds...
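Before these four jobs started, the target had been provisioned through rpc_cmd (bdev_io_wait.sh@18-25 above). A sketch of the equivalent direct scripts/rpc.py calls against the /var/tmp/spdk.sock socket announced by waitforlisten; note the deliberately tiny bdev_io pool (-p 5 -c 1), which is what pushes the bdev layer into the io_wait retry path this test exercises:

  ./scripts/rpc.py bdev_set_options -p 5 -c 1          # tiny bdev_io pool and cache
  ./scripts/rpc.py framework_start_init                # leave --wait-for-rpc mode
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u caps in-capsule data at 8192 bytes
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB ram disk, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420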
00:07:47.850 11157.00 IOPS, 43.58 MiB/s
00:07:47.850 Latency(us)
00:07:47.850 [2024-12-11T13:44:30.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:47.850 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:47.850 Nvme1n1 : 1.01 11216.32 43.81 0.00 0.00 11367.81 5388.52 20971.52
00:07:47.850 [2024-12-11T13:44:30.623Z] ===================================================================================================================
00:07:47.850 [2024-12-11T13:44:30.623Z] Total : 11216.32 43.81 0.00 0.00 11367.81 5388.52 20971.52
00:07:48.108 5041.00 IOPS, 19.69 MiB/s
00:07:48.108 Latency(us)
00:07:48.108 [2024-12-11T13:44:30.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:48.108 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:48.108 Nvme1n1 : 1.02 5066.56 19.79 0.00 0.00 25020.67 9903.22 41943.04
00:07:48.108 [2024-12-11T13:44:30.881Z] ===================================================================================================================
00:07:48.108 [2024-12-11T13:44:30.881Z] Total : 5066.56 19.79 0.00 0.00 25020.67 9903.22 41943.04
00:07:48.108 188232.00 IOPS, 735.28 MiB/s
00:07:48.108 Latency(us)
00:07:48.108 [2024-12-11T13:44:30.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:48.108 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:48.108 Nvme1n1 : 1.00 187877.58 733.90 0.00 0.00 677.54 292.79 1856.85
00:07:48.108 [2024-12-11T13:44:30.881Z] ===================================================================================================================
00:07:48.108 [2024-12-11T13:44:30.881Z] Total : 187877.58 733.90 0.00 0.00 677.54 292.79 1856.85
00:07:48.108 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 582286
00:07:48.108 5521.00 IOPS, 21.57 MiB/s
00:07:48.108 Latency(us)
00:07:48.108 [2024-12-11T13:44:30.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:48.108 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:48.108 Nvme1n1 : 1.01 5622.68 21.96 0.00 0.00 22684.66 4757.43 50875.35
00:07:48.108 [2024-12-11T13:44:30.881Z] ===================================================================================================================
00:07:48.108 [2024-12-11T13:44:30.881Z] Total : 5622.68 21.96 0.00 0.00 22684.66 4757.43 50875.35
00:07:48.108 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 582288
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 582290
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- 
nvmfcleanup 00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.366 14:44:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.366 rmmod nvme_tcp 00:07:48.366 rmmod nvme_fabrics 00:07:48.366 rmmod nvme_keyring 00:07:48.366 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 582253 ']' 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 582253 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 582253 ']' 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 582253 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 582253 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 582253' 00:07:48.367 killing process with pid 582253 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 582253 00:07:48.367 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 582253 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.627 14:44:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.627 14:44:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:51.192 00:07:51.192 real 0m7.347s 00:07:51.192 user 0m16.329s 00:07:51.192 sys 0m3.508s 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.192 ************************************ 00:07:51.192 END TEST nvmf_bdev_io_wait 00:07:51.192 ************************************ 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.192 ************************************ 00:07:51.192 START TEST nvmf_queue_depth 00:07:51.192 ************************************ 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:51.192 * Looking for test storage... 
00:07:51.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:51.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.192 --rc genhtml_branch_coverage=1 00:07:51.192 --rc genhtml_function_coverage=1 00:07:51.192 --rc genhtml_legend=1 00:07:51.192 --rc geninfo_all_blocks=1 00:07:51.192 --rc geninfo_unexecuted_blocks=1 00:07:51.192 00:07:51.192 ' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:51.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.192 --rc genhtml_branch_coverage=1 00:07:51.192 --rc genhtml_function_coverage=1 00:07:51.192 --rc genhtml_legend=1 00:07:51.192 --rc geninfo_all_blocks=1 00:07:51.192 --rc geninfo_unexecuted_blocks=1 00:07:51.192 00:07:51.192 ' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:51.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.192 --rc genhtml_branch_coverage=1 00:07:51.192 --rc genhtml_function_coverage=1 00:07:51.192 --rc genhtml_legend=1 00:07:51.192 --rc geninfo_all_blocks=1 00:07:51.192 --rc geninfo_unexecuted_blocks=1 00:07:51.192 00:07:51.192 ' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:51.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.192 --rc genhtml_branch_coverage=1 00:07:51.192 --rc genhtml_function_coverage=1 00:07:51.192 --rc genhtml_legend=1 00:07:51.192 --rc geninfo_all_blocks=1 00:07:51.192 --rc geninfo_unexecuted_blocks=1 00:07:51.192 00:07:51.192 ' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.192 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.193 14:44:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:53.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.100 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:07:53.359 00:07:53.359 --- 10.0.0.2 ping statistics --- 00:07:53.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.359 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:07:53.359 00:07:53.359 --- 10.0.0.1 ping statistics --- 00:07:53.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.359 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=584520 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 584520 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 584520 ']' 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.359 14:44:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.359 [2024-12-11 14:44:36.040437] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
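waitforlisten, used above with pid 584520, blocks until the freshly started nvmf_tgt answers on its RPC socket. A minimal equivalent of that loop (a simplified sketch; the real helper in common/autotest_common.sh also handles a retry budget and a configurable socket path):

  # poll the RPC socket until the target responds
  while ! ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 584520 || exit 1   # give up if the target process died
      sleep 0.1
  done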
00:07:53.359 [2024-12-11 14:44:36.040539] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.359 [2024-12-11 14:44:36.119685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.618 [2024-12-11 14:44:36.178376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.618 [2024-12-11 14:44:36.178434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.618 [2024-12-11 14:44:36.178463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.618 [2024-12-11 14:44:36.178475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.618 [2024-12-11 14:44:36.178485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.618 [2024-12-11 14:44:36.179136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 [2024-12-11 14:44:36.324500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 Malloc0 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 14:44:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.618 [2024-12-11 14:44:36.373519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=584669 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 584669 /var/tmp/bdevperf.sock 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 584669 ']' 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.618 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.895 [2024-12-11 14:44:36.420988] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
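queue_depth.sh@23-27 above provisions the target entirely over JSON-RPC, and queue_depth.sh@29-35 launches bdevperf against it at queue depth 1024. Replayed by hand, the sequence would look roughly like this (a sketch: it assumes nvmf_tgt is already up and listening on the default /var/tmp/spdk.sock, and that /var/tmp/bdevperf.sock exists before the attach call, which the harness guarantees with waitforlisten):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Target side: TCP transport, one 64 MiB / 512 B-block malloc bdev,
  # one subsystem with that namespace and a listener on 10.0.0.2:4420.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: 1024 outstanding 4096-byte verify I/Os for 10 seconds.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

All flags are taken verbatim from the trace; only the ordering into one script and the backgrounding of bdevperf are this sketch's own framing.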
00:07:53.895 [2024-12-11 14:44:36.421064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid584669 ] 00:07:53.895 [2024-12-11 14:44:36.487831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.895 [2024-12-11 14:44:36.544231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:54.158 NVMe0n1 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.158 14:44:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.416 Running I/O for 10 seconds... 00:07:56.284 8192.00 IOPS, 32.00 MiB/s [2024-12-11T13:44:40.433Z] 8447.00 IOPS, 33.00 MiB/s [2024-12-11T13:44:41.368Z] 8506.33 IOPS, 33.23 MiB/s [2024-12-11T13:44:42.302Z] 8449.50 IOPS, 33.01 MiB/s [2024-12-11T13:44:43.236Z] 8537.80 IOPS, 33.35 MiB/s [2024-12-11T13:44:44.169Z] 8532.67 IOPS, 33.33 MiB/s [2024-12-11T13:44:45.104Z] 8573.71 IOPS, 33.49 MiB/s [2024-12-11T13:44:46.036Z] 8573.38 IOPS, 33.49 MiB/s [2024-12-11T13:44:47.411Z] 8610.56 IOPS, 33.63 MiB/s [2024-12-11T13:44:47.411Z] 8596.80 IOPS, 33.58 MiB/s 00:08:04.638 Latency(us) 00:08:04.638 [2024-12-11T13:44:47.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.638 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:04.638 Verification LBA range: start 0x0 length 0x4000 00:08:04.638 NVMe0n1 : 10.06 8645.73 33.77 0.00 0.00 117986.73 11165.39 71458.51 00:08:04.638 [2024-12-11T13:44:47.411Z] =================================================================================================================== 00:08:04.638 [2024-12-11T13:44:47.411Z] Total : 8645.73 33.77 0.00 0.00 117986.73 11165.39 71458.51 00:08:04.638 { 00:08:04.638 "results": [ 00:08:04.638 { 00:08:04.638 "job": "NVMe0n1", 00:08:04.638 "core_mask": "0x1", 00:08:04.638 "workload": "verify", 00:08:04.638 "status": "finished", 00:08:04.638 "verify_range": { 00:08:04.638 "start": 0, 00:08:04.638 "length": 16384 00:08:04.638 }, 00:08:04.638 "queue_depth": 1024, 00:08:04.638 "io_size": 4096, 00:08:04.638 "runtime": 10.060578, 00:08:04.638 "iops": 8645.725921512661, 00:08:04.638 "mibps": 33.772366880908834, 00:08:04.638 "io_failed": 0, 00:08:04.638 "io_timeout": 0, 00:08:04.638 "avg_latency_us": 117986.72628755451, 00:08:04.638 "min_latency_us": 11165.392592592592, 00:08:04.638 "max_latency_us": 71458.5125925926 00:08:04.638 } 00:08:04.638 ], 00:08:04.638 "core_count": 1 00:08:04.638 } 00:08:04.638 14:44:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 584669 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 584669 ']' 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 584669 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584669 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584669' 00:08:04.638 killing process with pid 584669 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 584669 00:08:04.638 Received shutdown signal, test time was about 10.000000 seconds 00:08:04.638 00:08:04.638 Latency(us) 00:08:04.638 [2024-12-11T13:44:47.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:04.638 [2024-12-11T13:44:47.411Z] =================================================================================================================== 00:08:04.638 [2024-12-11T13:44:47.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 584669 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.638 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.638 rmmod nvme_tcp 00:08:04.638 rmmod nvme_fabrics 00:08:04.638 rmmod nvme_keyring 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 584520 ']' 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 584520 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 584520 ']' 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 584520 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584520 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584520' 00:08:04.896 killing process with pid 584520 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 584520 00:08:04.896 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 584520 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:05.156 14:44:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:07.061 00:08:07.061 real 0m16.354s 00:08:07.061 user 0m22.896s 00:08:07.061 sys 0m3.178s 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.061 ************************************ 00:08:07.061 END TEST nvmf_queue_depth 00:08:07.061 ************************************ 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.061 ************************************ 00:08:07.061 START TEST nvmf_target_multipath 00:08:07.061 ************************************ 00:08:07.061 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.320 * Looking for test storage... 00:08:07.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.320 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.321 --rc genhtml_branch_coverage=1 00:08:07.321 --rc genhtml_function_coverage=1 00:08:07.321 --rc genhtml_legend=1 00:08:07.321 --rc geninfo_all_blocks=1 00:08:07.321 --rc geninfo_unexecuted_blocks=1 00:08:07.321 00:08:07.321 ' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.321 --rc genhtml_branch_coverage=1 00:08:07.321 --rc genhtml_function_coverage=1 00:08:07.321 --rc genhtml_legend=1 00:08:07.321 --rc geninfo_all_blocks=1 00:08:07.321 --rc geninfo_unexecuted_blocks=1 00:08:07.321 00:08:07.321 ' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.321 --rc genhtml_branch_coverage=1 00:08:07.321 --rc genhtml_function_coverage=1 00:08:07.321 --rc genhtml_legend=1 00:08:07.321 --rc geninfo_all_blocks=1 00:08:07.321 --rc geninfo_unexecuted_blocks=1 00:08:07.321 00:08:07.321 ' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.321 --rc genhtml_branch_coverage=1 00:08:07.321 --rc genhtml_function_coverage=1 00:08:07.321 --rc genhtml_legend=1 00:08:07.321 --rc geninfo_all_blocks=1 00:08:07.321 --rc geninfo_unexecuted_blocks=1 00:08:07.321 00:08:07.321 ' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.321 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.321 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.322 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.322 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.322 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.322 14:44:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:09.857 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:09.858 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:09.858 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:09.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.858 14:44:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:09.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:09.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:08:09.858 00:08:09.858 --- 10.0.0.2 ping statistics --- 00:08:09.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.858 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:08:09.858 00:08:09.858 --- 10.0.0.1 ping statistics --- 00:08:09.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.858 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:09.858 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:09.859 only one NIC for nvmf test 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
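The nvmftestfini path that follows unwinds everything the init path created. Condensed into a standalone sketch (the retry loop and the iptables filter are exactly what the trace shows; the namespace removal happens inside _remove_spdk_ns, whose body the trace hides behind 15> /dev/null, so the ip netns delete line below is an assumed equivalent, not a command visible in this log):

  sync
  set +e
  # nvme-tcp refuses to unload while a queue is still live; the harness
  # retries the module removal up to 20 times while connections drain.
  for i in {1..20}; do
      sudo modprobe -v -r nvme-tcp &&
      sudo modprobe -v -r nvme-fabrics && break
  done
  set -e
  # Strip only the rules that setup tagged with the SPDK_NVMF comment.
  sudo iptables-save | grep -v SPDK_NVMF | sudo iptables-restore
  sudo ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  sudo ip -4 addr flush cvl_0_1

Deleting the namespace returns cvl_0_0 to the root namespace, and the final address flush on cvl_0_1 matches the last command each test emits before printing its timing summary.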
00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.859 rmmod nvme_tcp 00:08:09.859 rmmod nvme_fabrics 00:08:09.859 rmmod nvme_keyring 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.859 14:44:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:11.768 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:12.027 00:08:12.027 real 0m4.748s 00:08:12.027 user 0m0.959s 00:08:12.027 sys 0m1.674s 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:12.027 ************************************ 00:08:12.027 END TEST nvmf_target_multipath 00:08:12.027 ************************************ 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:12.027 ************************************ 00:08:12.027 START TEST nvmf_zcopy 00:08:12.027 ************************************ 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:12.027 * Looking for test storage... 
00:08:12.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.027 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.028 --rc genhtml_branch_coverage=1 00:08:12.028 --rc genhtml_function_coverage=1 00:08:12.028 --rc genhtml_legend=1 00:08:12.028 --rc geninfo_all_blocks=1 00:08:12.028 --rc geninfo_unexecuted_blocks=1 00:08:12.028 00:08:12.028 ' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.028 --rc genhtml_branch_coverage=1 00:08:12.028 --rc genhtml_function_coverage=1 00:08:12.028 --rc genhtml_legend=1 00:08:12.028 --rc geninfo_all_blocks=1 00:08:12.028 --rc geninfo_unexecuted_blocks=1 00:08:12.028 00:08:12.028 ' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.028 --rc genhtml_branch_coverage=1 00:08:12.028 --rc genhtml_function_coverage=1 00:08:12.028 --rc genhtml_legend=1 00:08:12.028 --rc geninfo_all_blocks=1 00:08:12.028 --rc geninfo_unexecuted_blocks=1 00:08:12.028 00:08:12.028 ' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:12.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.028 --rc genhtml_branch_coverage=1 00:08:12.028 --rc genhtml_function_coverage=1 00:08:12.028 --rc genhtml_legend=1 00:08:12.028 --rc geninfo_all_blocks=1 00:08:12.028 --rc geninfo_unexecuted_blocks=1 00:08:12.028 00:08:12.028 ' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:12.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:12.028 14:44:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.559 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:14.560 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:14.560 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:14.560 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:14.560 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.560 14:44:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:08:14.560 00:08:14.560 --- 10.0.0.2 ping statistics --- 00:08:14.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.560 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:08:14.560 00:08:14.560 --- 10.0.0.1 ping statistics --- 00:08:14.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.560 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=589879 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 589879 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 589879 ']' 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.560 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.560 [2024-12-11 14:44:57.164032] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:08:14.561 [2024-12-11 14:44:57.164134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.561 [2024-12-11 14:44:57.239581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.561 [2024-12-11 14:44:57.297690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.561 [2024-12-11 14:44:57.297752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.561 [2024-12-11 14:44:57.297781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.561 [2024-12-11 14:44:57.297793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.561 [2024-12-11 14:44:57.297803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.561 [2024-12-11 14:44:57.298450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 [2024-12-11 14:44:57.449018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 [2024-12-11 14:44:57.465222] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 malloc0 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.819 { 00:08:14.819 "params": { 00:08:14.819 "name": "Nvme$subsystem", 00:08:14.819 "trtype": "$TEST_TRANSPORT", 00:08:14.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.819 "adrfam": "ipv4", 00:08:14.819 "trsvcid": "$NVMF_PORT", 00:08:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.819 "hdgst": ${hdgst:-false}, 00:08:14.819 "ddgst": ${ddgst:-false} 00:08:14.819 }, 00:08:14.819 "method": "bdev_nvme_attach_controller" 00:08:14.819 } 00:08:14.819 EOF 00:08:14.819 )") 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
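[editor's note] The trace above shows how the test builds the bdevperf configuration on the fly: gen_nvmf_target_json accumulates one heredoc JSON fragment per subsystem in a bash array, joins the fragments with IFS=',' and pretty-prints the result through jq, and bdevperf consumes it from an anonymous descriptor (--json /dev/fd/62) so no temp file is written. A minimal sketch of that pattern follows; build_target_json, the "subsystems"/"bdev" wrapper, and the hardcoded digest defaults are illustrative assumptions, not a verbatim copy of nvmf/common.sh:

#!/usr/bin/env bash
set -euo pipefail

# Collect one attach-controller fragment per subsystem, join with a comma,
# and validate/pretty-print with jq (illustrative sketch, not nvmf/common.sh).
build_target_json() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  # ${config[*]} joins the fragments with the first character of IFS (",").
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# bdevperf can then read the config from an anonymous fd, e.g.:
#   bdevperf --json <(build_target_json 1) -t 10 -q 128 -w verify -o 8192
build_target_json 1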
00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:14.819 14:44:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.819 "params": { 00:08:14.819 "name": "Nvme1", 00:08:14.819 "trtype": "tcp", 00:08:14.819 "traddr": "10.0.0.2", 00:08:14.819 "adrfam": "ipv4", 00:08:14.819 "trsvcid": "4420", 00:08:14.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.819 "hdgst": false, 00:08:14.819 "ddgst": false 00:08:14.819 }, 00:08:14.819 "method": "bdev_nvme_attach_controller" 00:08:14.819 }' 00:08:14.819 [2024-12-11 14:44:57.549378] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:08:14.819 [2024-12-11 14:44:57.549469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid589907 ] 00:08:15.079 [2024-12-11 14:44:57.617074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.079 [2024-12-11 14:44:57.675619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.337 Running I/O for 10 seconds... 00:08:17.206 5226.00 IOPS, 40.83 MiB/s [2024-12-11T13:45:01.355Z] 5320.50 IOPS, 41.57 MiB/s [2024-12-11T13:45:02.289Z] 5329.67 IOPS, 41.64 MiB/s [2024-12-11T13:45:03.226Z] 5337.50 IOPS, 41.70 MiB/s [2024-12-11T13:45:04.161Z] 5341.20 IOPS, 41.73 MiB/s [2024-12-11T13:45:05.097Z] 5352.67 IOPS, 41.82 MiB/s [2024-12-11T13:45:06.031Z] 5355.00 IOPS, 41.84 MiB/s [2024-12-11T13:45:07.407Z] 5361.25 IOPS, 41.88 MiB/s [2024-12-11T13:45:08.343Z] 5363.11 IOPS, 41.90 MiB/s [2024-12-11T13:45:08.343Z] 5365.70 IOPS, 41.92 MiB/s 00:08:25.570 Latency(us) 00:08:25.570 [2024-12-11T13:45:08.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:25.570 Verification LBA range: start 0x0 length 0x1000 00:08:25.570 Nvme1n1 : 10.06 5344.82 41.76 0.00 0.00 23786.91 1292.52 41554.68 00:08:25.570 [2024-12-11T13:45:08.343Z] =================================================================================================================== 00:08:25.570 [2024-12-11T13:45:08.343Z] Total : 5344.82 41.76 0.00 0.00 23786.91 1292.52 41554.68 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=591694 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:25.570 { 00:08:25.570 "params": { 00:08:25.570 "name": 
"Nvme$subsystem", 00:08:25.570 "trtype": "$TEST_TRANSPORT", 00:08:25.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.570 "adrfam": "ipv4", 00:08:25.570 "trsvcid": "$NVMF_PORT", 00:08:25.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.570 "hdgst": ${hdgst:-false}, 00:08:25.570 "ddgst": ${ddgst:-false} 00:08:25.570 }, 00:08:25.570 "method": "bdev_nvme_attach_controller" 00:08:25.570 } 00:08:25.570 EOF 00:08:25.570 )") 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:25.570 [2024-12-11 14:45:08.247716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.570 [2024-12-11 14:45:08.247757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.570 14:45:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:25.570 "params": { 00:08:25.570 "name": "Nvme1", 00:08:25.570 "trtype": "tcp", 00:08:25.570 "traddr": "10.0.0.2", 00:08:25.570 "adrfam": "ipv4", 00:08:25.570 "trsvcid": "4420", 00:08:25.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:25.570 "hdgst": false, 00:08:25.570 "ddgst": false 00:08:25.570 }, 00:08:25.570 "method": "bdev_nvme_attach_controller" 00:08:25.570 }' 00:08:25.570 [2024-12-11 14:45:08.255677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.570 [2024-12-11 14:45:08.255702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.570 [2024-12-11 14:45:08.263693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.570 [2024-12-11 14:45:08.263723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.570 [2024-12-11 14:45:08.271711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.570 [2024-12-11 14:45:08.271732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.570 [2024-12-11 14:45:08.279733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.570 [2024-12-11 14:45:08.279755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.570 [2024-12-11 14:45:08.286091] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:08:25.570 [2024-12-11 14:45:08.286165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid591694 ] 00:08:25.570 [2024-12-11 14:45:08.357554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.830 [2024-12-11 14:45:08.419506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.089 Running I/O for 5 seconds... [2024-12-11 14:45:08.287757 through 14:45:09.588134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace (this two-line error pair recurs continuously around the notices above for the duration of the 5-second random read/write job; the duplicate occurrences are elided here)
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.867 [2024-12-11 14:45:09.598349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.867 [2024-12-11 14:45:09.598377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.867 [2024-12-11 14:45:09.608881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.867 [2024-12-11 14:45:09.608908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.867 [2024-12-11 14:45:09.619798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.867 [2024-12-11 14:45:09.619825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.867 [2024-12-11 14:45:09.630625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.867 [2024-12-11 14:45:09.630653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 [2024-12-11 14:45:09.643506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.126 [2024-12-11 14:45:09.643533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 [2024-12-11 14:45:09.653550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.126 [2024-12-11 14:45:09.653577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 [2024-12-11 14:45:09.664123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.126 [2024-12-11 14:45:09.664150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 [2024-12-11 14:45:09.674882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.126 [2024-12-11 14:45:09.674909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 [2024-12-11 14:45:09.685708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.126 [2024-12-11 14:45:09.685736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 [2024-12-11 14:45:09.696575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.126 [2024-12-11 14:45:09.696603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.126 11850.00 IOPS, 92.58 MiB/s [2024-12-11T13:45:09.899Z] [2024-12-11 14:45:09.707451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.707478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.719724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.719751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.728950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.728977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.740490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.740518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 
14:45:09.751471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.751499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.762629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.762656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.775250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.775285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.785456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.785483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.796142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.796170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.810795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.810822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.821354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.821381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.832005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.832032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.842848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.842876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.853841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.853868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.866775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.866801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.876938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.876965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.127 [2024-12-11 14:45:09.887784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.127 [2024-12-11 14:45:09.887812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.900141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.900168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.909443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.909471] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.920897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.920925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.933726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.933753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.943954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.943981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.954704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.954731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.967273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.967301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.976953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.976981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.987362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.987397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:09.997818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:09.997861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.008810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:10.008844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.019162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:10.019192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.029663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:10.029693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.040944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:10.040974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.051605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:10.051633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.062463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.385 [2024-12-11 14:45:10.062491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.385 [2024-12-11 14:45:10.075600] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.075628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.086125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.086153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.096422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.096450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.107238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.107267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.120627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.120655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.131206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.131233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.141799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.141827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.386 [2024-12-11 14:45:10.152475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.386 [2024-12-11 14:45:10.152502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.162628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.162656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.173101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.173129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.183573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.183601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.194303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.194343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.205365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.205392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.216022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.216049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.226615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.226643] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.237098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.237125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.247871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.247898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.260690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.260729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.270807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.270834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.281319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.281347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.292063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.292090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.304572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.304600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.315073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.315101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.325854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.325882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.336459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.336486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.346920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.346947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.359515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.359541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.369833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.644 [2024-12-11 14:45:10.369861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.644 [2024-12-11 14:45:10.380333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.645 [2024-12-11 14:45:10.380361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.645 [2024-12-11 14:45:10.391063] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.645 [2024-12-11 14:45:10.391091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.645 [2024-12-11 14:45:10.402049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.645 [2024-12-11 14:45:10.402085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.645 [2024-12-11 14:45:10.412395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.645 [2024-12-11 14:45:10.412422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.422866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.422893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.433235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.433263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.443968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.443995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.454098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.454125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.464867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.464894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.477621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.477654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.489585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.489613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.498732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.498759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.510311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.510339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.522279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.522306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.532396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.532423] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.542955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.542983] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.553858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.553886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.564645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.564673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.577101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.577127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.587444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.587471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.597736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.597763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.608251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.608279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.618435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.618462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.628536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.628573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.639334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.639362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.649965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.649993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.660783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.660810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.907 [2024-12-11 14:45:10.673616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.907 [2024-12-11 14:45:10.673642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.684538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.684585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.697161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.697190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 11856.50 IOPS, 92.63 MiB/s [2024-12-11T13:45:10.982Z] [2024-12-11 
14:45:10.708041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.708068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.718996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.719023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.733803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.733830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.744068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.744095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.754366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.754393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.764665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.764692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.775232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.775260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.785763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.785790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.796676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.796703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.807024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.807051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.817944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.817971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.828835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.828861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.841229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.841257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.851366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.851393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.861666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.861693] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.872144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.872171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.882240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.882267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.892891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.892918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.903179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.903206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.913906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.913933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.924656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.924684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.935265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.935292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.209 [2024-12-11 14:45:10.947685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.209 [2024-12-11 14:45:10.947721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:10.960333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:10.960368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:10.971113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:10.971141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:10.981809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:10.981836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:10.994518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:10.994555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.005103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.005132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.017793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.017830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.027932] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.027959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.038432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.038459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.049039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.049066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.060061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.060089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.073492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.073519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.085680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.085708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.095452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.095479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.105848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.105875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.116381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.116407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.126995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.127022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.137521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.137557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.148202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.148229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.159416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.159443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.172576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.172603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.182369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.182396] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.192719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.192745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.203211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.203238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.213731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.213757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.224280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.224317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.235139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.235166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.245319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.245346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.255555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.255582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.498 [2024-12-11 14:45:11.266250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.498 [2024-12-11 14:45:11.266277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.757 [2024-12-11 14:45:11.277044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.757 [2024-12-11 14:45:11.277071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.757 [2024-12-11 14:45:11.287700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.287728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.300081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.300109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.309936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.309964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.320798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.320826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.331443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.331470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.342195] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.342221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.353172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.353198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.365784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.365811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.376232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.376259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.387050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.387077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.399715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.399742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.409866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.409893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.420323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.420351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.430863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.430899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.441583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.441618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.454192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.454221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.466859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.466894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.477227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.477256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.488652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.488680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.499326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.499360] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.509979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.510007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.758 [2024-12-11 14:45:11.523203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.758 [2024-12-11 14:45:11.523231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.533963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.533992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.545093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.545121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.556134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.556162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.566732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.566760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.579309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.579336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.589322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.589349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.600458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.600485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.612930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.612958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.624730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.624758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.634047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.634074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.645462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.645496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.658501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.658528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.668688] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.668715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.678886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.678913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.689541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.689577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.700160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.700187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 11862.00 IOPS, 92.67 MiB/s [2024-12-11T13:45:11.790Z] [2024-12-11 14:45:11.710587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.710614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.721329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.721356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.732054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.732097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.744748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.744776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.754729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.754756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.765166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.765193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.775658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.775685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.017 [2024-12-11 14:45:11.786296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.017 [2024-12-11 14:45:11.786323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.277 [2024-12-11 14:45:11.797441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.277 [2024-12-11 14:45:11.797470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.277 [2024-12-11 14:45:11.807623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.277 [2024-12-11 14:45:11.807651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.277 [2024-12-11 14:45:11.817945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:29.277 [2024-12-11 14:45:11.817972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:29.277 [2024-12-11 14:45:11.828457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:29.277 [2024-12-11 14:45:11.828485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.056 11891.25 IOPS, 92.90 MiB/s [2024-12-11T13:45:12.829Z]
00:08:31.107 11914.40 IOPS, 93.08 MiB/s [2024-12-11T13:45:13.880Z]
00:08:31.107 [2024-12-11 14:45:13.716797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:31.107 [2024-12-11 14:45:13.716824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:31.107
00:08:31.107 Latency(us)
00:08:31.107 [2024-12-11T13:45:13.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:31.107 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:31.107 Nvme1n1 : 5.01 11915.73 93.09 0.00 0.00 10728.22 4466.16 18835.53
00:08:31.107 [2024-12-11T13:45:13.880Z] ===================================================================================================================
00:08:31.107 [2024-12-11T13:45:13.880Z] Total : 11915.73 93.09 0.00 0.00 10728.22 4466.16 18835.53
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (591694) - No such process 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 591694 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.367 delay0 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.367 14:45:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:31.367 [2024-12-11 14:45:14.107639] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:37.924 Initializing NVMe Controllers 00:08:37.924 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:37.924 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:37.924 Initialization complete. Launching workers. 
00:08:37.924 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 48 00:08:37.924 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 338, failed to submit 30 00:08:37.924 success 142, unsuccessful 196, failed 0 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.924 rmmod nvme_tcp 00:08:37.924 rmmod nvme_fabrics 00:08:37.924 rmmod nvme_keyring 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 589879 ']' 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 589879 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 589879 ']' 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 589879 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 589879 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 589879' 00:08:37.924 killing process with pid 589879 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 589879 00:08:37.924 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 589879 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.925 14:45:20 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.925 14:45:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:40.466 00:08:40.466 real 0m28.021s 00:08:40.466 user 0m40.023s 00:08:40.466 sys 0m8.749s 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 ************************************ 00:08:40.466 END TEST nvmf_zcopy 00:08:40.466 ************************************ 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.466 ************************************ 00:08:40.466 START TEST nvmf_nmic 00:08:40.466 ************************************ 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:40.466 * Looking for test storage... 
00:08:40.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.466 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.467 --rc genhtml_branch_coverage=1 00:08:40.467 --rc genhtml_function_coverage=1 00:08:40.467 --rc genhtml_legend=1 00:08:40.467 --rc geninfo_all_blocks=1 00:08:40.467 --rc geninfo_unexecuted_blocks=1 00:08:40.467 00:08:40.467 ' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.467 --rc genhtml_branch_coverage=1 00:08:40.467 --rc genhtml_function_coverage=1 00:08:40.467 --rc genhtml_legend=1 00:08:40.467 --rc geninfo_all_blocks=1 00:08:40.467 --rc geninfo_unexecuted_blocks=1 00:08:40.467 00:08:40.467 ' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.467 --rc genhtml_branch_coverage=1 00:08:40.467 --rc genhtml_function_coverage=1 00:08:40.467 --rc genhtml_legend=1 00:08:40.467 --rc geninfo_all_blocks=1 00:08:40.467 --rc geninfo_unexecuted_blocks=1 00:08:40.467 00:08:40.467 ' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:40.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.467 --rc genhtml_branch_coverage=1 00:08:40.467 --rc genhtml_function_coverage=1 00:08:40.467 --rc genhtml_legend=1 00:08:40.467 --rc geninfo_all_blocks=1 00:08:40.467 --rc geninfo_unexecuted_blocks=1 00:08:40.467 00:08:40.467 ' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.467 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:40.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:40.468 
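A note on the "[: : integer expression expected" line above: it is harness noise, not a test failure. build_nvmf_app_args evaluates '[' '' -eq 1 ']', feeding an empty string to test's numeric -eq, so [ prints the complaint and the branch is simply skipped. A small sketch of the failure mode plus two defensive spellings (the variable name is illustrative; in the script it is whichever flag expanded to the empty string):

flag=""
[ "$flag" -eq 1 ] && echo enabled     # stderr: [: : integer expression expected

[ "${flag:-0}" -eq 1 ] && echo enabled             # default empty to 0: quiet, false
[[ -n "$flag" && "$flag" -eq 1 ]] && echo enabled  # guard before the numeric test

The same message shows up each time a test sources nvmf/common.sh (line 33), so it can be ignored when grepping this log for real failures.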
14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:40.468 14:45:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.376 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.377 14:45:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.377 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.377 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.377 14:45:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:08:42.377 00:08:42.377 --- 10.0.0.2 ping statistics --- 00:08:42.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.377 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:08:42.377 00:08:42.377 --- 10.0.0.1 ping statistics --- 00:08:42.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.377 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=595129 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 595129 00:08:42.377 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.378 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 595129 ']' 00:08:42.378 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.378 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.378 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.378 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.378 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.378 [2024-12-11 14:45:25.141615] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
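The nvmf_tgt now starting runs inside a network namespace that nvmftestinit assembled from the two E810 ports discovered above: cvl_0_0 is moved into the namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and a one-packet ping in each direction verifies the path. Condensed from the trace (the address flushes and the comment the ipts wrapper appends to the iptables rule are omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Because NVMF_APP is prefixed with the "ip netns exec cvl_0_0_ns_spdk" command, every listener created later in this test binds to 10.0.0.2.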
00:08:42.378 [2024-12-11 14:45:25.141695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.635 [2024-12-11 14:45:25.218977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.635 [2024-12-11 14:45:25.276261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.635 [2024-12-11 14:45:25.276320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.635 [2024-12-11 14:45:25.276348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.635 [2024-12-11 14:45:25.276359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.635 [2024-12-11 14:45:25.276368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.635 [2024-12-11 14:45:25.278026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.635 [2024-12-11 14:45:25.278088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.635 [2024-12-11 14:45:25.278154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.635 [2024-12-11 14:45:25.278157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.635 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.635 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:42.635 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.635 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.635 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 [2024-12-11 14:45:25.421586] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 Malloc0 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 [2024-12-11 14:45:25.493175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:42.894 test case1: single bdev can't be used in multiple subsystems 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.894 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.894 [2024-12-11 14:45:25.516978] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:42.894 [2024-12-11 14:45:25.517005] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:42.894 [2024-12-11 14:45:25.517034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:42.894 request: 00:08:42.894 { 00:08:42.894 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:42.894 "namespace": { 00:08:42.894 "bdev_name": "Malloc0", 00:08:42.894 "no_auto_visible": false, 
00:08:42.894 "hide_metadata": false 00:08:42.894 }, 00:08:42.894 "method": "nvmf_subsystem_add_ns", 00:08:42.894 "req_id": 1 00:08:42.894 } 00:08:42.894 Got JSON-RPC error response 00:08:42.894 response: 00:08:42.895 { 00:08:42.895 "code": -32602, 00:08:42.895 "message": "Invalid parameters" 00:08:42.895 } 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:42.895 Adding namespace failed - expected result. 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:42.895 test case2: host connect to nvmf target in multiple paths 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.895 [2024-12-11 14:45:25.525085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.895 14:45:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.461 14:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:44.396 14:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.396 14:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:44.396 14:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.396 14:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:44.396 14:45:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:46.296 14:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:46.296 14:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:46.296 14:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.296 14:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:46.296 14:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.296 14:45:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:46.296 14:45:28 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:46.296 [global] 00:08:46.296 thread=1 00:08:46.296 invalidate=1 00:08:46.296 rw=write 00:08:46.296 time_based=1 00:08:46.296 runtime=1 00:08:46.296 ioengine=libaio 00:08:46.296 direct=1 00:08:46.296 bs=4096 00:08:46.296 iodepth=1 00:08:46.296 norandommap=0 00:08:46.296 numjobs=1 00:08:46.296 00:08:46.296 verify_dump=1 00:08:46.296 verify_backlog=512 00:08:46.296 verify_state_save=0 00:08:46.296 do_verify=1 00:08:46.296 verify=crc32c-intel 00:08:46.296 [job0] 00:08:46.296 filename=/dev/nvme0n1 00:08:46.296 Could not set queue depth (nvme0n1) 00:08:46.296 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.296 fio-3.35 00:08:46.296 Starting 1 thread 00:08:47.671 00:08:47.671 job0: (groupid=0, jobs=1): err= 0: pid=595764: Wed Dec 11 14:45:30 2024 00:08:47.671 read: IOPS=1048, BW=4192KiB/s (4293kB/s)(4364KiB/1041msec) 00:08:47.671 slat (nsec): min=4860, max=39039, avg=11998.52, stdev=3877.88 00:08:47.671 clat (usec): min=161, max=40998, avg=675.83, stdev=4250.34 00:08:47.671 lat (usec): min=168, max=41031, avg=687.82, stdev=4251.76 00:08:47.671 clat percentiles (usec): 00:08:47.671 | 1.00th=[ 172], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 206], 00:08:47.671 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 235], 00:08:47.671 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 273], 00:08:47.671 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:47.671 | 99.99th=[41157] 00:08:47.671 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(6144KiB/1041msec); 0 zone resets 00:08:47.671 slat (usec): min=6, max=27782, avg=32.82, stdev=708.51 00:08:47.671 clat (usec): min=116, max=265, avg=149.35, stdev=19.54 00:08:47.671 lat (usec): min=125, max=28043, avg=182.17, stdev=711.63 00:08:47.671 clat percentiles (usec): 00:08:47.671 | 1.00th=[ 123], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 137], 00:08:47.671 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:08:47.671 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 186], 00:08:47.671 | 99.00th=[ 231], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 265], 00:08:47.671 | 99.99th=[ 265] 00:08:47.671 bw ( KiB/s): min= 1640, max=10648, per=100.00%, avg=6144.00, stdev=6369.62, samples=2 00:08:47.671 iops : min= 410, max= 2662, avg=1536.00, stdev=1592.40, samples=2 00:08:47.671 lat (usec) : 250=93.00%, 500=6.47%, 750=0.08% 00:08:47.671 lat (msec) : 50=0.46% 00:08:47.671 cpu : usr=1.63%, sys=3.75%, ctx=2629, majf=0, minf=1 00:08:47.671 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.671 issued rwts: total=1091,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.671 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.671 00:08:47.671 Run status group 0 (all jobs): 00:08:47.671 READ: bw=4192KiB/s (4293kB/s), 4192KiB/s-4192KiB/s (4293kB/s-4293kB/s), io=4364KiB (4469kB), run=1041-1041msec 00:08:47.671 WRITE: bw=5902KiB/s (6044kB/s), 5902KiB/s-5902KiB/s (6044kB/s-6044kB/s), io=6144KiB (6291kB), run=1041-1041msec 00:08:47.671 00:08:47.671 Disk stats (read/write): 00:08:47.671 nvme0n1: ios=1113/1536, merge=0/0, ticks=1553/223, in_queue=1776, util=98.50% 00:08:47.671 
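The job above was generated by scripts/fio-wrapper from the flags -p nvmf -i 4096 -d 1 -t write -r 1 -v. For reference, an equivalent standalone fio invocation of the same one-second verified write job would look roughly like this (assuming the same /dev/nvme0n1; set-style boolean options passed bare, value options as --key=value):

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
    --ioengine=libaio --direct=1 --invalidate=1 \
    --time_based --runtime=1 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512 --verify_state_save=0

The crc32c-intel verify mode is what turns this from a plain throughput job into a data-integrity check over the freshly connected NVMe/TCP namespace.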
14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.671 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.671 rmmod nvme_tcp 00:08:47.671 rmmod nvme_fabrics 00:08:47.671 rmmod nvme_keyring 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 595129 ']' 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 595129 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 595129 ']' 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 595129 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.671 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 595129 00:08:47.931 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.931 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.931 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 595129' 00:08:47.931 killing process with pid 595129 00:08:47.931 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 595129 00:08:47.931 14:45:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 595129 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.192 14:45:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.102 00:08:50.102 real 0m10.112s 00:08:50.102 user 0m22.741s 00:08:50.102 sys 0m2.536s 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.102 ************************************ 00:08:50.102 END TEST nvmf_nmic 00:08:50.102 ************************************ 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.102 ************************************ 00:08:50.102 START TEST nvmf_fio_target 00:08:50.102 ************************************ 00:08:50.102 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.102 * Looking for test storage... 
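Stripped of the rpc_cmd wrappers, the nmic test that just finished is a short JSON-RPC conversation with the target. Condensed from the trace (rpc.py path relative to the SPDK checkout; the target is already listening on the default /var/tmp/spdk.sock):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
# expected to fail: Malloc0 is already claimed exclusive_write by cnode1,
# producing the "Adding namespace failed - expected result." seen above
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# cnode1 on two ports, hence the two nvme connect calls and the
# "disconnected 2 controller(s)" at teardown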
00:08:50.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.361 --rc genhtml_branch_coverage=1 00:08:50.361 --rc genhtml_function_coverage=1 00:08:50.361 --rc genhtml_legend=1 00:08:50.361 --rc geninfo_all_blocks=1 00:08:50.361 --rc geninfo_unexecuted_blocks=1 00:08:50.361 00:08:50.361 ' 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.361 --rc genhtml_branch_coverage=1 00:08:50.361 --rc genhtml_function_coverage=1 00:08:50.361 --rc genhtml_legend=1 00:08:50.361 --rc geninfo_all_blocks=1 00:08:50.361 --rc geninfo_unexecuted_blocks=1 00:08:50.361 00:08:50.361 ' 00:08:50.361 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.361 --rc genhtml_branch_coverage=1 00:08:50.362 --rc genhtml_function_coverage=1 00:08:50.362 --rc genhtml_legend=1 00:08:50.362 --rc geninfo_all_blocks=1 00:08:50.362 --rc geninfo_unexecuted_blocks=1 00:08:50.362 00:08:50.362 ' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.362 --rc genhtml_branch_coverage=1 00:08:50.362 --rc genhtml_function_coverage=1 00:08:50.362 --rc genhtml_legend=1 00:08:50.362 --rc geninfo_all_blocks=1 00:08:50.362 --rc geninfo_unexecuted_blocks=1 00:08:50.362 00:08:50.362 ' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.362 14:45:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.362 14:45:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.900 14:45:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.900 14:45:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.900 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.900 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.900 14:45:35 
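Both E810 ports (0000:0a:00.0 and 0000:0a:00.1, device ID 0x159b) are resolved to their kernel interfaces by globbing sysfs, which is what the `pci_net_devs` loop above does before reporting cvl_0_0 and cvl_0_1. A standalone sketch of the same lookup, assuming only the standard sysfs layout:

    # Map a PCI function to the net interfaces the kernel created for it.
    pci=0000:0a:00.0                      # first E810 port from the log
    for path in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$path" ] || continue        # glob matches nothing if no driver is bound
        echo "Found net devices under $pci: ${path##*/}"
    done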
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.900 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:08:52.901 00:08:52.901 --- 10.0.0.2 ping statistics --- 00:08:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.901 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:52.901 00:08:52.901 --- 10.0.0.1 ping statistics --- 00:08:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.901 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=597856 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 597856 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 597856 ']' 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.901 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.901 [2024-12-11 14:45:35.431019] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
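nvmf_tcp_init above builds a self-contained two-port loop out of the back-to-back NIC ports: the target interface cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator interface cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in the firewall, and a ping in each direction proves the path before the target starts. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # ns -> initiator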
00:08:52.901 [2024-12-11 14:45:35.431101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.901 [2024-12-11 14:45:35.505739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.901 [2024-12-11 14:45:35.567058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.901 [2024-12-11 14:45:35.567108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.901 [2024-12-11 14:45:35.567135] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.901 [2024-12-11 14:45:35.567146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.901 [2024-12-11 14:45:35.567155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.901 [2024-12-11 14:45:35.568813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.901 [2024-12-11 14:45:35.568852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.901 [2024-12-11 14:45:35.568952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.901 [2024-12-11 14:45:35.568958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.159 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.417 [2024-12-11 14:45:35.958453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.417 14:45:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.675 14:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:53.675 14:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.933 14:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:53.934 14:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.192 14:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:54.192 14:45:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.450 14:45:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:54.450 14:45:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:54.707 14:45:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.964 14:45:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:54.964 14:45:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.531 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:55.531 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.531 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:55.531 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:56.095 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.095 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:56.095 14:45:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.353 14:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:56.353 14:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.610 14:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.868 [2024-12-11 14:45:39.624389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.126 14:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:57.383 14:45:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:57.641 14:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.206 14:45:40 
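The provisioning sequence from target/fio.sh, reduced to its RPC calls: a TCP transport, seven 64 MiB malloc bdevs (two exported plain, two striped into raid0, three concatenated into concat0), one subsystem carrying all four namespaces, a listener on the namespaced address, and a kernel-initiator connect. The loop and variable capture below are a condensation of the trace, not the script verbatim; `bdev_malloc_create` prints the new bdev's name, which the script collects:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    malloc0=$($rpc bdev_malloc_create 64 512)            # prints e.g. Malloc0
    malloc1=$($rpc bdev_malloc_create 64 512)
    # Malloc2..Malloc6 come from five more bdev_malloc_create calls (elided):
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
    for bdev in "$malloc0" "$malloc1" raid0 concat0; do
        $rpc nvmf_subsystem_add_ns $nqn "$bdev"
    done
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420      # yields /dev/nvme0n1..n4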
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:58.206 14:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:58.206 14:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.206 14:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:58.206 14:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:58.206 14:45:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:00.104 14:45:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:00.104 [global] 00:09:00.104 thread=1 00:09:00.104 invalidate=1 00:09:00.104 rw=write 00:09:00.104 time_based=1 00:09:00.104 runtime=1 00:09:00.104 ioengine=libaio 00:09:00.104 direct=1 00:09:00.104 bs=4096 00:09:00.104 iodepth=1 00:09:00.104 norandommap=0 00:09:00.104 numjobs=1 00:09:00.104 00:09:00.104 verify_dump=1 00:09:00.104 verify_backlog=512 00:09:00.104 verify_state_save=0 00:09:00.104 do_verify=1 00:09:00.104 verify=crc32c-intel 00:09:00.104 [job0] 00:09:00.104 filename=/dev/nvme0n1 00:09:00.104 [job1] 00:09:00.104 filename=/dev/nvme0n2 00:09:00.104 [job2] 00:09:00.104 filename=/dev/nvme0n3 00:09:00.104 [job3] 00:09:00.104 filename=/dev/nvme0n4 00:09:00.104 Could not set queue depth (nvme0n1) 00:09:00.104 Could not set queue depth (nvme0n2) 00:09:00.104 Could not set queue depth (nvme0n3) 00:09:00.104 Could not set queue depth (nvme0n4) 00:09:00.362 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.362 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.362 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.362 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.362 fio-3.35 00:09:00.362 Starting 4 threads 00:09:01.734 00:09:01.734 job0: (groupid=0, jobs=1): err= 0: pid=598936: Wed Dec 11 14:45:44 2024 00:09:01.734 read: IOPS=2060, BW=8244KiB/s (8442kB/s)(8252KiB/1001msec) 00:09:01.734 slat (nsec): min=5226, max=59191, avg=10676.02, stdev=6024.82 00:09:01.734 clat (usec): min=175, max=743, avg=249.71, stdev=50.60 00:09:01.734 lat (usec): min=180, max=748, avg=260.39, stdev=51.15 00:09:01.734 clat percentiles (usec): 00:09:01.734 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 
00:09:01.734 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:09:01.734 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 302], 00:09:01.734 | 99.00th=[ 519], 99.50th=[ 603], 99.90th=[ 652], 99.95th=[ 685], 00:09:01.734 | 99.99th=[ 742] 00:09:01.734 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:01.734 slat (nsec): min=6537, max=63641, avg=14132.42, stdev=7293.47 00:09:01.734 clat (usec): min=118, max=558, avg=160.30, stdev=22.65 00:09:01.734 lat (usec): min=126, max=577, avg=174.43, stdev=26.71 00:09:01.734 clat percentiles (usec): 00:09:01.734 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 139], 00:09:01.734 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:09:01.734 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 194], 00:09:01.734 | 99.00th=[ 215], 99.50th=[ 229], 99.90th=[ 273], 99.95th=[ 293], 00:09:01.734 | 99.99th=[ 562] 00:09:01.734 bw ( KiB/s): min= 9920, max= 9920, per=62.48%, avg=9920.00, stdev= 0.00, samples=1 00:09:01.734 iops : min= 2480, max= 2480, avg=2480.00, stdev= 0.00, samples=1 00:09:01.734 lat (usec) : 250=83.04%, 500=16.46%, 750=0.50% 00:09:01.734 cpu : usr=4.90%, sys=7.50%, ctx=4623, majf=0, minf=2 00:09:01.734 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.734 issued rwts: total=2063,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.734 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.734 job1: (groupid=0, jobs=1): err= 0: pid=598937: Wed Dec 11 14:45:44 2024 00:09:01.734 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:09:01.734 slat (nsec): min=8244, max=14201, avg=13252.23, stdev=1201.88 00:09:01.734 clat (usec): min=40661, max=42086, avg=41197.30, stdev=445.20 00:09:01.734 lat (usec): min=40669, max=42100, avg=41210.55, stdev=445.60 00:09:01.734 clat percentiles (usec): 00:09:01.734 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:09:01.734 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:01.734 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:09:01.734 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.734 | 99.99th=[42206] 00:09:01.734 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:09:01.734 slat (nsec): min=6209, max=51998, avg=13234.87, stdev=5890.07 00:09:01.734 clat (usec): min=175, max=394, avg=228.94, stdev=20.82 00:09:01.734 lat (usec): min=195, max=418, avg=242.17, stdev=20.66 00:09:01.734 clat percentiles (usec): 00:09:01.734 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:09:01.735 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:09:01.735 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 260], 00:09:01.735 | 99.00th=[ 285], 99.50th=[ 359], 99.90th=[ 396], 99.95th=[ 396], 00:09:01.735 | 99.99th=[ 396] 00:09:01.735 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.735 lat (usec) : 250=85.21%, 500=10.67% 00:09:01.735 lat (msec) : 50=4.12% 00:09:01.735 cpu : usr=0.19%, sys=0.68%, ctx=534, majf=0, minf=1 00:09:01.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.735 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.735 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.735 job2: (groupid=0, jobs=1): err= 0: pid=598938: Wed Dec 11 14:45:44 2024 00:09:01.735 read: IOPS=203, BW=815KiB/s (835kB/s)(816KiB/1001msec) 00:09:01.735 slat (nsec): min=5770, max=30915, avg=8315.40, stdev=4305.06 00:09:01.735 clat (usec): min=302, max=42026, avg=4279.92, stdev=11858.73 00:09:01.735 lat (usec): min=313, max=42053, avg=4288.23, stdev=11860.78 00:09:01.735 clat percentiles (usec): 00:09:01.735 | 1.00th=[ 314], 5.00th=[ 359], 10.00th=[ 461], 20.00th=[ 482], 00:09:01.735 | 30.00th=[ 494], 40.00th=[ 498], 50.00th=[ 502], 60.00th=[ 506], 00:09:01.735 | 70.00th=[ 510], 80.00th=[ 519], 90.00th=[ 578], 95.00th=[41157], 00:09:01.735 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.735 | 99.99th=[42206] 00:09:01.735 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:01.735 slat (nsec): min=7756, max=57653, avg=16787.12, stdev=7905.62 00:09:01.735 clat (usec): min=165, max=333, avg=223.65, stdev=22.01 00:09:01.735 lat (usec): min=184, max=361, avg=240.44, stdev=19.69 00:09:01.735 clat percentiles (usec): 00:09:01.735 | 1.00th=[ 182], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:01.735 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:09:01.735 | 70.00th=[ 235], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 262], 00:09:01.735 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 334], 00:09:01.735 | 99.99th=[ 334] 00:09:01.735 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.735 lat (usec) : 250=60.61%, 500=23.18%, 750=13.55% 00:09:01.735 lat (msec) : 50=2.65% 00:09:01.735 cpu : usr=1.10%, sys=0.80%, ctx=716, majf=0, minf=1 00:09:01.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.735 issued rwts: total=204,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.735 job3: (groupid=0, jobs=1): err= 0: pid=598940: Wed Dec 11 14:45:44 2024 00:09:01.735 read: IOPS=88, BW=355KiB/s (364kB/s)(360KiB/1013msec) 00:09:01.735 slat (nsec): min=4703, max=30750, avg=10742.61, stdev=4545.17 00:09:01.735 clat (usec): min=212, max=42068, avg=9945.07, stdev=17436.68 00:09:01.735 lat (usec): min=217, max=42083, avg=9955.81, stdev=17438.49 00:09:01.735 clat percentiles (usec): 00:09:01.735 | 1.00th=[ 212], 5.00th=[ 253], 10.00th=[ 269], 20.00th=[ 306], 00:09:01.735 | 30.00th=[ 343], 40.00th=[ 375], 50.00th=[ 408], 60.00th=[ 465], 00:09:01.735 | 70.00th=[ 506], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:01.735 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.735 | 99.99th=[42206] 00:09:01.735 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:01.735 slat (nsec): min=6399, max=43112, avg=12905.59, stdev=5498.36 00:09:01.735 clat (usec): min=144, max=299, avg=212.33, stdev=25.50 00:09:01.735 lat (usec): min=151, max=316, avg=225.23, stdev=26.66 00:09:01.735 clat 
percentiles (usec): 00:09:01.735 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 190], 00:09:01.735 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 221], 00:09:01.735 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253], 00:09:01.735 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 302], 99.95th=[ 302], 00:09:01.735 | 99.99th=[ 302] 00:09:01.735 bw ( KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.735 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.735 lat (usec) : 250=80.73%, 500=14.29%, 750=1.50% 00:09:01.735 lat (msec) : 50=3.49% 00:09:01.735 cpu : usr=0.59%, sys=0.49%, ctx=602, majf=0, minf=1 00:09:01.735 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.735 issued rwts: total=90,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.735 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.735 00:09:01.735 Run status group 0 (all jobs): 00:09:01.735 READ: bw=9221KiB/s (9442kB/s), 85.3KiB/s-8244KiB/s (87.3kB/s-8442kB/s), io=9516KiB (9744kB), run=1001-1032msec 00:09:01.735 WRITE: bw=15.5MiB/s (16.3MB/s), 1984KiB/s-9.99MiB/s (2032kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1032msec 00:09:01.735 00:09:01.735 Disk stats (read/write): 00:09:01.735 nvme0n1: ios=1855/2048, merge=0/0, ticks=460/320, in_queue=780, util=87.17% 00:09:01.735 nvme0n2: ios=22/512, merge=0/0, ticks=706/113, in_queue=819, util=86.47% 00:09:01.735 nvme0n3: ios=17/512, merge=0/0, ticks=701/107, in_queue=808, util=88.90% 00:09:01.735 nvme0n4: ios=46/512, merge=0/0, ticks=773/100, in_queue=873, util=91.03% 00:09:01.735 14:45:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:01.735 [global] 00:09:01.735 thread=1 00:09:01.735 invalidate=1 00:09:01.735 rw=randwrite 00:09:01.735 time_based=1 00:09:01.735 runtime=1 00:09:01.735 ioengine=libaio 00:09:01.735 direct=1 00:09:01.735 bs=4096 00:09:01.735 iodepth=1 00:09:01.735 norandommap=0 00:09:01.735 numjobs=1 00:09:01.735 00:09:01.735 verify_dump=1 00:09:01.735 verify_backlog=512 00:09:01.735 verify_state_save=0 00:09:01.735 do_verify=1 00:09:01.735 verify=crc32c-intel 00:09:01.735 [job0] 00:09:01.735 filename=/dev/nvme0n1 00:09:01.735 [job1] 00:09:01.735 filename=/dev/nvme0n2 00:09:01.735 [job2] 00:09:01.735 filename=/dev/nvme0n3 00:09:01.735 [job3] 00:09:01.735 filename=/dev/nvme0n4 00:09:01.735 Could not set queue depth (nvme0n1) 00:09:01.735 Could not set queue depth (nvme0n2) 00:09:01.735 Could not set queue depth (nvme0n3) 00:09:01.735 Could not set queue depth (nvme0n4) 00:09:01.993 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.993 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.993 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.993 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.993 fio-3.35 00:09:01.993 Starting 4 threads 00:09:03.365 00:09:03.365 job0: (groupid=0, jobs=1): err= 0: pid=599271: Wed Dec 11 14:45:45 2024 00:09:03.365 read: IOPS=1919, BW=7676KiB/s 
(7861kB/s)(7684KiB/1001msec) 00:09:03.365 slat (nsec): min=5597, max=36530, avg=12407.84, stdev=5132.02 00:09:03.365 clat (usec): min=185, max=40512, avg=286.27, stdev=920.85 00:09:03.365 lat (usec): min=194, max=40522, avg=298.68, stdev=920.70 00:09:03.365 clat percentiles (usec): 00:09:03.365 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:09:03.365 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:09:03.365 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 388], 95.00th=[ 400], 00:09:03.365 | 99.00th=[ 445], 99.50th=[ 474], 99.90th=[ 1139], 99.95th=[40633], 00:09:03.365 | 99.99th=[40633] 00:09:03.365 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:03.365 slat (nsec): min=7199, max=52642, avg=17041.44, stdev=7130.27 00:09:03.365 clat (usec): min=124, max=528, avg=182.80, stdev=23.69 00:09:03.365 lat (usec): min=131, max=557, avg=199.84, stdev=26.35 00:09:03.365 clat percentiles (usec): 00:09:03.365 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 157], 20.00th=[ 165], 00:09:03.365 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:09:03.365 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 223], 00:09:03.365 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 289], 99.95th=[ 297], 00:09:03.365 | 99.99th=[ 529] 00:09:03.365 bw ( KiB/s): min= 8464, max= 8464, per=42.07%, avg=8464.00, stdev= 0.00, samples=1 00:09:03.365 iops : min= 2116, max= 2116, avg=2116.00, stdev= 0.00, samples=1 00:09:03.365 lat (usec) : 250=82.41%, 500=17.36%, 750=0.10%, 1000=0.08% 00:09:03.365 lat (msec) : 2=0.03%, 50=0.03% 00:09:03.365 cpu : usr=4.70%, sys=7.70%, ctx=3970, majf=0, minf=1 00:09:03.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.365 issued rwts: total=1921,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.365 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.365 job1: (groupid=0, jobs=1): err= 0: pid=599283: Wed Dec 11 14:45:45 2024 00:09:03.365 read: IOPS=932, BW=3729KiB/s (3818kB/s)(3796KiB/1018msec) 00:09:03.365 slat (nsec): min=5500, max=56482, avg=14237.75, stdev=6667.72 00:09:03.365 clat (usec): min=182, max=42071, avg=786.37, stdev=4602.92 00:09:03.365 lat (usec): min=188, max=42088, avg=800.61, stdev=4602.83 00:09:03.365 clat percentiles (usec): 00:09:03.365 | 1.00th=[ 196], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 231], 00:09:03.365 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:09:03.365 | 70.00th=[ 255], 80.00th=[ 269], 90.00th=[ 367], 95.00th=[ 478], 00:09:03.365 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:03.365 | 99.99th=[42206] 00:09:03.365 write: IOPS=1005, BW=4024KiB/s (4120kB/s)(4096KiB/1018msec); 0 zone resets 00:09:03.365 slat (nsec): min=7376, max=43028, avg=19628.71, stdev=6942.57 00:09:03.365 clat (usec): min=156, max=401, avg=222.05, stdev=34.29 00:09:03.365 lat (usec): min=176, max=411, avg=241.68, stdev=32.33 00:09:03.365 clat percentiles (usec): 00:09:03.365 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 190], 00:09:03.365 | 30.00th=[ 200], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 229], 00:09:03.365 | 70.00th=[ 237], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 281], 00:09:03.365 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 375], 99.95th=[ 404], 00:09:03.365 | 99.99th=[ 404] 00:09:03.365 bw ( KiB/s): min= 712, max= 
7480, per=20.36%, avg=4096.00, stdev=4785.70, samples=2 00:09:03.365 iops : min= 178, max= 1870, avg=1024.00, stdev=1196.42, samples=2 00:09:03.365 lat (usec) : 250=71.26%, 500=26.71%, 750=1.42% 00:09:03.365 lat (msec) : 50=0.61% 00:09:03.365 cpu : usr=2.85%, sys=4.13%, ctx=1973, majf=0, minf=1 00:09:03.365 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.366 issued rwts: total=949,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.366 job2: (groupid=0, jobs=1): err= 0: pid=599284: Wed Dec 11 14:45:45 2024 00:09:03.366 read: IOPS=1275, BW=5103KiB/s (5225kB/s)(5108KiB/1001msec) 00:09:03.366 slat (nsec): min=5574, max=46373, avg=11048.99, stdev=5626.05 00:09:03.366 clat (usec): min=182, max=42107, avg=500.86, stdev=3271.12 00:09:03.366 lat (usec): min=188, max=42124, avg=511.90, stdev=3271.46 00:09:03.366 clat percentiles (usec): 00:09:03.366 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:09:03.366 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239], 00:09:03.366 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 351], 00:09:03.366 | 99.00th=[ 510], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:03.366 | 99.99th=[42206] 00:09:03.366 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:03.366 slat (nsec): min=7230, max=53594, avg=14373.88, stdev=7000.69 00:09:03.366 clat (usec): min=136, max=383, avg=203.95, stdev=41.54 00:09:03.366 lat (usec): min=143, max=405, avg=218.32, stdev=45.20 00:09:03.366 clat percentiles (usec): 00:09:03.366 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:09:03.366 | 30.00th=[ 172], 40.00th=[ 188], 50.00th=[ 208], 60.00th=[ 221], 00:09:03.366 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 269], 00:09:03.366 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 383], 99.95th=[ 383], 00:09:03.366 | 99.99th=[ 383] 00:09:03.366 bw ( KiB/s): min= 4096, max= 4096, per=20.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:03.366 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:03.366 lat (usec) : 250=82.01%, 500=17.42%, 750=0.28% 00:09:03.366 lat (msec) : 50=0.28% 00:09:03.366 cpu : usr=3.20%, sys=4.50%, ctx=2813, majf=0, minf=2 00:09:03.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.366 issued rwts: total=1277,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.366 job3: (groupid=0, jobs=1): err= 0: pid=599285: Wed Dec 11 14:45:45 2024 00:09:03.366 read: IOPS=24, BW=98.6KiB/s (101kB/s)(100KiB/1014msec) 00:09:03.366 slat (nsec): min=10832, max=35256, avg=22901.40, stdev=9275.00 00:09:03.366 clat (usec): min=295, max=42149, avg=35058.32, stdev=15441.04 00:09:03.366 lat (usec): min=314, max=42164, avg=35081.22, stdev=15441.73 00:09:03.366 clat percentiles (usec): 00:09:03.366 | 1.00th=[ 297], 5.00th=[ 375], 10.00th=[ 429], 20.00th=[40633], 00:09:03.366 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:09:03.366 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 
00:09:03.366 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:03.366 | 99.99th=[42206] 00:09:03.366 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:09:03.366 slat (nsec): min=6832, max=55831, avg=15556.00, stdev=6520.31 00:09:03.366 clat (usec): min=159, max=429, avg=247.38, stdev=34.02 00:09:03.366 lat (usec): min=167, max=453, avg=262.94, stdev=33.81 00:09:03.366 clat percentiles (usec): 00:09:03.366 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 223], 00:09:03.366 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 249], 00:09:03.366 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 314], 00:09:03.366 | 99.00th=[ 379], 99.50th=[ 408], 99.90th=[ 429], 99.95th=[ 429], 00:09:03.366 | 99.99th=[ 429] 00:09:03.366 bw ( KiB/s): min= 4096, max= 4096, per=20.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:03.366 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:03.366 lat (usec) : 250=58.66%, 500=37.24%, 750=0.19% 00:09:03.366 lat (msec) : 50=3.91% 00:09:03.366 cpu : usr=0.39%, sys=0.79%, ctx=540, majf=0, minf=1 00:09:03.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.366 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.366 00:09:03.366 Run status group 0 (all jobs): 00:09:03.366 READ: bw=16.0MiB/s (16.8MB/s), 98.6KiB/s-7676KiB/s (101kB/s-7861kB/s), io=16.3MiB (17.1MB), run=1001-1018msec 00:09:03.366 WRITE: bw=19.6MiB/s (20.6MB/s), 2020KiB/s-8184KiB/s (2068kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1018msec 00:09:03.366 00:09:03.366 Disk stats (read/write): 00:09:03.366 nvme0n1: ios=1561/1995, merge=0/0, ticks=1400/342, in_queue=1742, util=98.40% 00:09:03.366 nvme0n2: ios=832/1024, merge=0/0, ticks=590/212, in_queue=802, util=88.12% 00:09:03.366 nvme0n3: ios=1081/1286, merge=0/0, ticks=600/258, in_queue=858, util=91.15% 00:09:03.366 nvme0n4: ios=78/512, merge=0/0, ticks=972/124, in_queue=1096, util=98.11% 00:09:03.366 14:45:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:03.366 [global] 00:09:03.366 thread=1 00:09:03.366 invalidate=1 00:09:03.366 rw=write 00:09:03.366 time_based=1 00:09:03.366 runtime=1 00:09:03.366 ioengine=libaio 00:09:03.366 direct=1 00:09:03.366 bs=4096 00:09:03.366 iodepth=128 00:09:03.366 norandommap=0 00:09:03.366 numjobs=1 00:09:03.366 00:09:03.366 verify_dump=1 00:09:03.366 verify_backlog=512 00:09:03.366 verify_state_save=0 00:09:03.366 do_verify=1 00:09:03.366 verify=crc32c-intel 00:09:03.366 [job0] 00:09:03.366 filename=/dev/nvme0n1 00:09:03.366 [job1] 00:09:03.366 filename=/dev/nvme0n2 00:09:03.366 [job2] 00:09:03.366 filename=/dev/nvme0n3 00:09:03.366 [job3] 00:09:03.366 filename=/dev/nvme0n4 00:09:03.366 Could not set queue depth (nvme0n1) 00:09:03.366 Could not set queue depth (nvme0n2) 00:09:03.366 Could not set queue depth (nvme0n3) 00:09:03.366 Could not set queue depth (nvme0n4) 00:09:03.366 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.366 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:09:03.366 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.366 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.366 fio-3.35 00:09:03.366 Starting 4 threads 00:09:04.808 00:09:04.808 job0: (groupid=0, jobs=1): err= 0: pid=599511: Wed Dec 11 14:45:47 2024 00:09:04.808 read: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1002msec) 00:09:04.808 slat (usec): min=2, max=7047, avg=93.35, stdev=505.75 00:09:04.808 clat (usec): min=551, max=27999, avg=11941.55, stdev=2359.81 00:09:04.808 lat (usec): min=2894, max=28044, avg=12034.90, stdev=2399.77 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[ 6194], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10552], 00:09:04.808 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:09:04.808 | 70.00th=[12256], 80.00th=[12649], 90.00th=[14746], 95.00th=[16450], 00:09:04.808 | 99.00th=[20841], 99.50th=[22152], 99.90th=[25560], 99.95th=[25560], 00:09:04.808 | 99.99th=[27919] 00:09:04.808 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:09:04.808 slat (usec): min=4, max=7964, avg=95.59, stdev=480.87 00:09:04.808 clat (usec): min=6064, max=39053, avg=13264.40, stdev=5422.47 00:09:04.808 lat (usec): min=6071, max=39072, avg=13359.99, stdev=5458.99 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10159], 00:09:04.808 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11469], 60.00th=[12125], 00:09:04.808 | 70.00th=[13173], 80.00th=[14222], 90.00th=[20841], 95.00th=[25560], 00:09:04.808 | 99.00th=[34866], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:09:04.808 | 99.99th=[39060] 00:09:04.808 bw ( KiB/s): min=19960, max=19960, per=32.65%, avg=19960.00, stdev= 0.00, samples=1 00:09:04.808 iops : min= 4990, max= 4990, avg=4990.00, stdev= 0.00, samples=1 00:09:04.808 lat (usec) : 750=0.01% 00:09:04.808 lat (msec) : 4=0.42%, 10=12.30%, 20=80.70%, 50=6.57% 00:09:04.808 cpu : usr=6.89%, sys=10.89%, ctx=483, majf=0, minf=1 00:09:04.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:04.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.808 issued rwts: total=4905,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.808 job1: (groupid=0, jobs=1): err= 0: pid=599512: Wed Dec 11 14:45:47 2024 00:09:04.808 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:09:04.808 slat (usec): min=2, max=26626, avg=126.82, stdev=922.62 00:09:04.808 clat (usec): min=7265, max=67608, avg=15487.01, stdev=8761.67 00:09:04.808 lat (usec): min=8609, max=67632, avg=15613.83, stdev=8828.25 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11207], 00:09:04.808 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:09:04.808 | 70.00th=[15008], 80.00th=[19792], 90.00th=[22414], 95.00th=[33817], 00:09:04.808 | 99.00th=[65799], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:09:04.808 | 99.99th=[67634] 00:09:04.808 write: IOPS=4004, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1007msec); 0 zone resets 00:09:04.808 slat (usec): min=2, max=30439, avg=130.54, stdev=884.30 00:09:04.808 clat (usec): min=1668, max=77860, avg=17754.26, stdev=13232.00 00:09:04.808 lat 
(usec): min=6693, max=77871, avg=17884.80, stdev=13301.41 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[ 7898], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[11076], 00:09:04.808 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12125], 60.00th=[14353], 00:09:04.808 | 70.00th=[17695], 80.00th=[22938], 90.00th=[28705], 95.00th=[52167], 00:09:04.808 | 99.00th=[73925], 99.50th=[73925], 99.90th=[78119], 99.95th=[78119], 00:09:04.808 | 99.99th=[78119] 00:09:04.808 bw ( KiB/s): min=13976, max=17264, per=25.55%, avg=15620.00, stdev=2324.97, samples=2 00:09:04.808 iops : min= 3494, max= 4316, avg=3905.00, stdev=581.24, samples=2 00:09:04.808 lat (msec) : 2=0.01%, 10=8.09%, 20=69.63%, 50=18.94%, 100=3.32% 00:09:04.808 cpu : usr=2.78%, sys=4.47%, ctx=330, majf=0, minf=2 00:09:04.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:04.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.808 issued rwts: total=3584,4033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.808 job2: (groupid=0, jobs=1): err= 0: pid=599513: Wed Dec 11 14:45:47 2024 00:09:04.808 read: IOPS=2912, BW=11.4MiB/s (11.9MB/s)(11.9MiB/1049msec) 00:09:04.808 slat (usec): min=3, max=16092, avg=160.28, stdev=1014.77 00:09:04.808 clat (usec): min=8122, max=68249, avg=21694.90, stdev=11563.74 00:09:04.808 lat (usec): min=8141, max=74884, avg=21855.17, stdev=11641.24 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13304], 20.00th=[13566], 00:09:04.808 | 30.00th=[14091], 40.00th=[15401], 50.00th=[15664], 60.00th=[17171], 00:09:04.808 | 70.00th=[25035], 80.00th=[29492], 90.00th=[37487], 95.00th=[49021], 00:09:04.808 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[68682], 00:09:04.808 | 99.99th=[68682] 00:09:04.808 write: IOPS=2928, BW=11.4MiB/s (12.0MB/s)(12.0MiB/1049msec); 0 zone resets 00:09:04.808 slat (usec): min=5, max=13977, avg=156.88, stdev=967.06 00:09:04.808 clat (usec): min=5041, max=68742, avg=21599.21, stdev=11813.68 00:09:04.808 lat (usec): min=5052, max=68763, avg=21756.09, stdev=11899.31 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[ 8225], 5.00th=[10028], 10.00th=[11731], 20.00th=[12256], 00:09:04.808 | 30.00th=[14091], 40.00th=[15008], 50.00th=[19006], 60.00th=[22676], 00:09:04.808 | 70.00th=[23987], 80.00th=[27657], 90.00th=[34866], 95.00th=[45351], 00:09:04.808 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:09:04.808 | 99.99th=[68682] 00:09:04.808 bw ( KiB/s): min=11016, max=13560, per=20.10%, avg=12288.00, stdev=1798.88, samples=2 00:09:04.808 iops : min= 2754, max= 3390, avg=3072.00, stdev=449.72, samples=2 00:09:04.808 lat (msec) : 10=2.66%, 20=55.51%, 50=38.18%, 100=3.66% 00:09:04.808 cpu : usr=4.87%, sys=5.73%, ctx=233, majf=0, minf=1 00:09:04.808 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:04.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.808 issued rwts: total=3055,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.808 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.808 job3: (groupid=0, jobs=1): err= 0: pid=599514: Wed Dec 11 14:45:47 2024 00:09:04.808 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 
00:09:04.808 slat (usec): min=3, max=17020, avg=121.34, stdev=913.98 00:09:04.808 clat (usec): min=6297, max=47054, avg=16475.51, stdev=5880.36 00:09:04.808 lat (usec): min=6309, max=47067, avg=16596.85, stdev=5976.84 00:09:04.808 clat percentiles (usec): 00:09:04.808 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[11469], 20.00th=[13173], 00:09:04.808 | 30.00th=[13566], 40.00th=[14091], 50.00th=[14222], 60.00th=[14877], 00:09:04.808 | 70.00th=[16581], 80.00th=[18744], 90.00th=[26084], 95.00th=[30802], 00:09:04.808 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36963], 99.95th=[45876], 00:09:04.808 | 99.99th=[46924] 00:09:04.808 write: IOPS=3769, BW=14.7MiB/s (15.4MB/s)(14.9MiB/1010msec); 0 zone resets 00:09:04.809 slat (usec): min=4, max=11634, avg=124.10, stdev=747.47 00:09:04.809 clat (usec): min=273, max=65639, avg=18139.15, stdev=12762.10 00:09:04.809 lat (usec): min=520, max=65647, avg=18263.25, stdev=12848.96 00:09:04.809 clat percentiles (usec): 00:09:04.809 | 1.00th=[ 1745], 5.00th=[ 5014], 10.00th=[ 7177], 20.00th=[ 9896], 00:09:04.809 | 30.00th=[11731], 40.00th=[13042], 50.00th=[14484], 60.00th=[15664], 00:09:04.809 | 70.00th=[18744], 80.00th=[23200], 90.00th=[36439], 95.00th=[46924], 00:09:04.809 | 99.00th=[62129], 99.50th=[63177], 99.90th=[65799], 99.95th=[65799], 00:09:04.809 | 99.99th=[65799] 00:09:04.809 bw ( KiB/s): min=13512, max=15920, per=24.07%, avg=14716.00, stdev=1702.71, samples=2 00:09:04.809 iops : min= 3378, max= 3980, avg=3679.00, stdev=425.68, samples=2 00:09:04.809 lat (usec) : 500=0.03%, 750=0.08% 00:09:04.809 lat (msec) : 2=0.43%, 4=1.43%, 10=10.36%, 20=64.43%, 50=20.89% 00:09:04.809 lat (msec) : 100=2.34% 00:09:04.809 cpu : usr=3.96%, sys=7.63%, ctx=304, majf=0, minf=1 00:09:04.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:04.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.809 issued rwts: total=3584,3807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.809 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.809 00:09:04.809 Run status group 0 (all jobs): 00:09:04.809 READ: bw=56.3MiB/s (59.1MB/s), 11.4MiB/s-19.1MiB/s (11.9MB/s-20.1MB/s), io=59.1MiB (62.0MB), run=1002-1049msec 00:09:04.809 WRITE: bw=59.7MiB/s (62.6MB/s), 11.4MiB/s-20.0MiB/s (12.0MB/s-20.9MB/s), io=62.6MiB (65.7MB), run=1002-1049msec 00:09:04.809 00:09:04.809 Disk stats (read/write): 00:09:04.809 nvme0n1: ios=4136/4175, merge=0/0, ticks=22863/25474, in_queue=48337, util=100.00% 00:09:04.809 nvme0n2: ios=3085/3122, merge=0/0, ticks=16723/20146, in_queue=36869, util=86.29% 00:09:04.809 nvme0n3: ios=2619/2956, merge=0/0, ticks=21822/26935, in_queue=48757, util=98.12% 00:09:04.809 nvme0n4: ios=3156/3584, merge=0/0, ticks=45105/57520, in_queue=102625, util=97.80% 00:09:04.809 14:45:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:04.809 [global] 00:09:04.809 thread=1 00:09:04.809 invalidate=1 00:09:04.809 rw=randwrite 00:09:04.809 time_based=1 00:09:04.809 runtime=1 00:09:04.809 ioengine=libaio 00:09:04.809 direct=1 00:09:04.809 bs=4096 00:09:04.809 iodepth=128 00:09:04.809 norandommap=0 00:09:04.809 numjobs=1 00:09:04.809 00:09:04.809 verify_dump=1 00:09:04.809 verify_backlog=512 00:09:04.809 verify_state_save=0 00:09:04.809 do_verify=1 00:09:04.809 verify=crc32c-intel 00:09:04.809 [job0] 
00:09:04.809 filename=/dev/nvme0n1 00:09:04.809 [job1] 00:09:04.809 filename=/dev/nvme0n2 00:09:04.809 [job2] 00:09:04.809 filename=/dev/nvme0n3 00:09:04.809 [job3] 00:09:04.809 filename=/dev/nvme0n4 00:09:04.809 Could not set queue depth (nvme0n1) 00:09:04.809 Could not set queue depth (nvme0n2) 00:09:04.809 Could not set queue depth (nvme0n3) 00:09:04.809 Could not set queue depth (nvme0n4) 00:09:04.809 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.809 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.809 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.809 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.809 fio-3.35 00:09:04.809 Starting 4 threads 00:09:06.206 00:09:06.206 job0: (groupid=0, jobs=1): err= 0: pid=599745: Wed Dec 11 14:45:48 2024 00:09:06.206 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:09:06.206 slat (usec): min=2, max=22690, avg=174.29, stdev=1249.86 00:09:06.206 clat (usec): min=5505, max=97871, avg=22750.94, stdev=18589.35 00:09:06.206 lat (usec): min=5512, max=97906, avg=22925.23, stdev=18742.04 00:09:06.206 clat percentiles (usec): 00:09:06.206 | 1.00th=[ 6718], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10683], 00:09:06.206 | 30.00th=[11600], 40.00th=[15926], 50.00th=[17171], 60.00th=[17695], 00:09:06.206 | 70.00th=[19792], 80.00th=[26608], 90.00th=[54789], 95.00th=[70779], 00:09:06.206 | 99.00th=[85459], 99.50th=[85459], 99.90th=[94897], 99.95th=[98042], 00:09:06.206 | 99.99th=[98042] 00:09:06.206 write: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1005msec); 0 zone resets 00:09:06.206 slat (usec): min=3, max=12260, avg=131.57, stdev=717.79 00:09:06.206 clat (usec): min=921, max=70564, avg=18441.65, stdev=13776.10 00:09:06.206 lat (usec): min=4082, max=70583, avg=18573.22, stdev=13846.34 00:09:06.206 clat percentiles (usec): 00:09:06.206 | 1.00th=[ 5866], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[10159], 00:09:06.206 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[17957], 00:09:06.206 | 70.00th=[20579], 80.00th=[22676], 90.00th=[40633], 95.00th=[55313], 00:09:06.206 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:09:06.206 | 99.99th=[70779] 00:09:06.206 bw ( KiB/s): min= 8192, max=16384, per=21.86%, avg=12288.00, stdev=5792.62, samples=2 00:09:06.206 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:09:06.206 lat (usec) : 1000=0.02% 00:09:06.206 lat (msec) : 4=0.02%, 10=12.27%, 20=55.66%, 50=23.18%, 100=8.87% 00:09:06.206 cpu : usr=3.19%, sys=7.07%, ctx=298, majf=0, minf=1 00:09:06.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:06.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.206 issued rwts: total=3072,3098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.206 job1: (groupid=0, jobs=1): err= 0: pid=599746: Wed Dec 11 14:45:48 2024 00:09:06.206 read: IOPS=2904, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1005msec) 00:09:06.206 slat (usec): min=2, max=12823, avg=143.40, stdev=921.28 00:09:06.206 clat (usec): min=4798, max=50310, avg=17109.69, stdev=7383.20 00:09:06.206 lat (usec): min=4805, max=50323, 
avg=17253.09, stdev=7472.86 00:09:06.206 clat percentiles (usec): 00:09:06.206 | 1.00th=[ 5080], 5.00th=[10290], 10.00th=[10814], 20.00th=[11338], 00:09:06.206 | 30.00th=[12256], 40.00th=[12780], 50.00th=[14877], 60.00th=[18482], 00:09:06.206 | 70.00th=[19530], 80.00th=[21103], 90.00th=[25035], 95.00th=[33162], 00:09:06.206 | 99.00th=[42206], 99.50th=[45876], 99.90th=[48497], 99.95th=[49546], 00:09:06.206 | 99.99th=[50070] 00:09:06.206 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:09:06.206 slat (usec): min=3, max=29511, avg=182.77, stdev=1007.95 00:09:06.206 clat (usec): min=7060, max=58222, avg=24908.02, stdev=12098.51 00:09:06.206 lat (usec): min=7067, max=58241, avg=25090.80, stdev=12183.87 00:09:06.206 clat percentiles (usec): 00:09:06.206 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11338], 20.00th=[11600], 00:09:06.206 | 30.00th=[15139], 40.00th=[19792], 50.00th=[22938], 60.00th=[27657], 00:09:06.206 | 70.00th=[31851], 80.00th=[35390], 90.00th=[44827], 95.00th=[46924], 00:09:06.206 | 99.00th=[50594], 99.50th=[50594], 99.90th=[54264], 99.95th=[54264], 00:09:06.206 | 99.99th=[58459] 00:09:06.206 bw ( KiB/s): min=12288, max=12288, per=21.86%, avg=12288.00, stdev= 0.00, samples=2 00:09:06.206 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:06.206 lat (msec) : 10=3.25%, 20=56.95%, 50=38.93%, 100=0.87% 00:09:06.206 cpu : usr=2.39%, sys=4.08%, ctx=367, majf=0, minf=1 00:09:06.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:06.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.206 issued rwts: total=2919,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.206 job2: (groupid=0, jobs=1): err= 0: pid=599749: Wed Dec 11 14:45:48 2024 00:09:06.206 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:09:06.206 slat (usec): min=2, max=14154, avg=121.54, stdev=741.93 00:09:06.206 clat (usec): min=6150, max=38359, avg=15550.30, stdev=4251.02 00:09:06.206 lat (usec): min=6163, max=38368, avg=15671.84, stdev=4321.50 00:09:06.206 clat percentiles (usec): 00:09:06.206 | 1.00th=[ 8455], 5.00th=[11469], 10.00th=[12518], 20.00th=[12780], 00:09:06.206 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13829], 60.00th=[14484], 00:09:06.206 | 70.00th=[16188], 80.00th=[18482], 90.00th=[20317], 95.00th=[24249], 00:09:06.206 | 99.00th=[30016], 99.50th=[33817], 99.90th=[38536], 99.95th=[38536], 00:09:06.206 | 99.99th=[38536] 00:09:06.206 write: IOPS=3709, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1009msec); 0 zone resets 00:09:06.206 slat (usec): min=3, max=30252, avg=142.52, stdev=930.14 00:09:06.206 clat (usec): min=953, max=81600, avg=19307.18, stdev=11108.76 00:09:06.206 lat (usec): min=960, max=81606, avg=19449.70, stdev=11206.87 00:09:06.206 clat percentiles (usec): 00:09:06.206 | 1.00th=[ 5669], 5.00th=[ 8979], 10.00th=[11863], 20.00th=[12387], 00:09:06.206 | 30.00th=[12649], 40.00th=[13042], 50.00th=[14746], 60.00th=[18744], 00:09:06.206 | 70.00th=[20055], 80.00th=[27657], 90.00th=[32375], 95.00th=[36963], 00:09:06.206 | 99.00th=[62129], 99.50th=[64226], 99.90th=[81265], 99.95th=[81265], 00:09:06.207 | 99.99th=[81265] 00:09:06.207 bw ( KiB/s): min=12336, max=16592, per=25.73%, avg=14464.00, stdev=3009.45, samples=2 00:09:06.207 iops : min= 3084, max= 4148, avg=3616.00, stdev=752.36, samples=2 00:09:06.207 lat (usec) : 1000=0.07% 
00:09:06.207 lat (msec) : 10=4.59%, 20=74.96%, 50=18.58%, 100=1.82% 00:09:06.207 cpu : usr=3.97%, sys=6.05%, ctx=294, majf=0, minf=1 00:09:06.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:06.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.207 issued rwts: total=3584,3743,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.207 job3: (groupid=0, jobs=1): err= 0: pid=599750: Wed Dec 11 14:45:48 2024 00:09:06.207 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:09:06.207 slat (usec): min=2, max=13737, avg=117.06, stdev=765.72 00:09:06.207 clat (usec): min=5397, max=59460, avg=14757.71, stdev=5515.24 00:09:06.207 lat (usec): min=5423, max=59465, avg=14874.77, stdev=5548.31 00:09:06.207 clat percentiles (usec): 00:09:06.207 | 1.00th=[ 6783], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11731], 00:09:06.207 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13304], 60.00th=[13960], 00:09:06.207 | 70.00th=[15139], 80.00th=[16712], 90.00th=[19268], 95.00th=[26608], 00:09:06.207 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:09:06.207 | 99.99th=[59507] 00:09:06.207 write: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.9MiB/1013msec); 0 zone resets 00:09:06.207 slat (usec): min=3, max=16925, avg=108.37, stdev=598.95 00:09:06.207 clat (usec): min=3327, max=48849, avg=15717.33, stdev=6838.18 00:09:06.207 lat (usec): min=3333, max=48855, avg=15825.70, stdev=6880.55 00:09:06.207 clat percentiles (usec): 00:09:06.207 | 1.00th=[ 4883], 5.00th=[ 6718], 10.00th=[ 8586], 20.00th=[10552], 00:09:06.207 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[15008], 00:09:06.207 | 70.00th=[20317], 80.00th=[23200], 90.00th=[24511], 95.00th=[27395], 00:09:06.207 | 99.00th=[31327], 99.50th=[39584], 99.90th=[49021], 99.95th=[49021], 00:09:06.207 | 99.99th=[49021] 00:09:06.207 bw ( KiB/s): min=15560, max=18032, per=29.87%, avg=16796.00, stdev=1747.97, samples=2 00:09:06.207 iops : min= 3890, max= 4508, avg=4199.00, stdev=436.99, samples=2 00:09:06.207 lat (msec) : 4=0.19%, 10=10.73%, 20=68.45%, 50=20.61%, 100=0.01% 00:09:06.207 cpu : usr=6.52%, sys=9.19%, ctx=400, majf=0, minf=1 00:09:06.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:06.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.207 issued rwts: total=4096,4326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.207 00:09:06.207 Run status group 0 (all jobs): 00:09:06.207 READ: bw=52.7MiB/s (55.3MB/s), 11.3MiB/s-15.8MiB/s (11.9MB/s-16.6MB/s), io=53.4MiB (56.0MB), run=1005-1013msec 00:09:06.207 WRITE: bw=54.9MiB/s (57.6MB/s), 11.9MiB/s-16.7MiB/s (12.5MB/s-17.5MB/s), io=55.6MiB (58.3MB), run=1005-1013msec 00:09:06.207 00:09:06.207 Disk stats (read/write): 00:09:06.207 nvme0n1: ios=2758/3072, merge=0/0, ticks=23875/23919, in_queue=47794, util=96.59% 00:09:06.207 nvme0n2: ios=2412/2560, merge=0/0, ticks=19325/28651, in_queue=47976, util=97.76% 00:09:06.207 nvme0n3: ios=3130/3191, merge=0/0, ticks=23416/30168, in_queue=53584, util=97.59% 00:09:06.207 nvme0n4: ios=3212/3584, merge=0/0, ticks=40254/50455, in_queue=90709, util=98.31% 00:09:06.207 14:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@55 -- # sync 00:09:06.207 14:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=599891 00:09:06.207 14:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:06.207 14:45:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:06.207 [global] 00:09:06.207 thread=1 00:09:06.207 invalidate=1 00:09:06.207 rw=read 00:09:06.207 time_based=1 00:09:06.207 runtime=10 00:09:06.207 ioengine=libaio 00:09:06.207 direct=1 00:09:06.207 bs=4096 00:09:06.207 iodepth=1 00:09:06.207 norandommap=1 00:09:06.207 numjobs=1 00:09:06.207 00:09:06.207 [job0] 00:09:06.207 filename=/dev/nvme0n1 00:09:06.207 [job1] 00:09:06.207 filename=/dev/nvme0n2 00:09:06.207 [job2] 00:09:06.207 filename=/dev/nvme0n3 00:09:06.207 [job3] 00:09:06.207 filename=/dev/nvme0n4 00:09:06.207 Could not set queue depth (nvme0n1) 00:09:06.207 Could not set queue depth (nvme0n2) 00:09:06.207 Could not set queue depth (nvme0n3) 00:09:06.207 Could not set queue depth (nvme0n4) 00:09:06.207 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.207 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.207 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.207 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.207 fio-3.35 00:09:06.207 Starting 4 threads 00:09:09.485 14:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:09.485 14:45:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:09.485 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2154496, buflen=4096 00:09:09.486 fio: pid=600077, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:09.743 14:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:09.743 14:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:09.743 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=7344128, buflen=4096 00:09:09.743 fio: pid=600058, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.001 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15245312, buflen=4096 00:09:10.001 fio: pid=599989, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.001 14:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.001 14:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:10.259 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15974400, buflen=4096 00:09:10.259 fio: pid=600005, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 
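The io_u errors above are the point of this phase: fio.sh backgrounds a 10-second read job, sleeps briefly, then deletes the backing bdevs one RPC at a time while I/O is still in flight. Each delete hot-removes the corresponding namespace, so outstanding reads fail with err=95 (Operation not supported), and the harness later asserts that fio exited non-zero. A condensed sketch of the sequence, reconstructed from the trace (paths and flags are verbatim; the surrounding fio.sh plumbing is elided, and the delete loop continues below through Malloc6):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Background 10-second read job against the four exported namespaces:
    "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3

    # Pull the backing bdevs out from under it, one RPC at a time:
    "$SPDK/scripts/rpc.py" bdev_raid_delete concat0
    "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
    for bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
        "$SPDK/scripts/rpc.py" bdev_malloc_delete "$bdev"
    done

    # fio is expected to fail with err=95 on each file:
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'
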
00:09:10.259 14:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.259 14:45:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:10.259 00:09:10.259 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=599989: Wed Dec 11 14:45:52 2024 00:09:10.259 read: IOPS=1083, BW=4332KiB/s (4436kB/s)(14.5MiB/3437msec) 00:09:10.259 slat (usec): min=5, max=7924, avg=16.49, stdev=129.76 00:09:10.259 clat (usec): min=178, max=42323, avg=896.57, stdev=5134.97 00:09:10.259 lat (usec): min=185, max=49036, avg=913.05, stdev=5153.79 00:09:10.259 clat percentiles (usec): 00:09:10.259 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 225], 00:09:10.259 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:09:10.259 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 355], 00:09:10.259 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.259 | 99.99th=[42206] 00:09:10.259 bw ( KiB/s): min= 96, max=15592, per=45.56%, avg=4890.67, stdev=7196.65, samples=6 00:09:10.259 iops : min= 24, max= 3898, avg=1222.67, stdev=1799.16, samples=6 00:09:10.259 lat (usec) : 250=70.67%, 500=27.53%, 750=0.11%, 1000=0.03% 00:09:10.259 lat (msec) : 2=0.03%, 10=0.03%, 50=1.58% 00:09:10.259 cpu : usr=0.87%, sys=2.53%, ctx=3726, majf=0, minf=1 00:09:10.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.259 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.259 issued rwts: total=3723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.259 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=600005: Wed Dec 11 14:45:52 2024 00:09:10.259 read: IOPS=1052, BW=4211KiB/s (4312kB/s)(15.2MiB/3705msec) 00:09:10.259 slat (usec): min=5, max=7866, avg=14.15, stdev=125.86 00:09:10.259 clat (usec): min=180, max=58728, avg=929.63, stdev=5279.20 00:09:10.259 lat (usec): min=187, max=58739, avg=943.78, stdev=5297.35 00:09:10.259 clat percentiles (usec): 00:09:10.259 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 208], 00:09:10.259 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:09:10.259 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 396], 00:09:10.259 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:10.259 | 99.99th=[58983] 00:09:10.259 bw ( KiB/s): min= 133, max=16032, per=41.44%, avg=4448.71, stdev=6178.75, samples=7 00:09:10.259 iops : min= 33, max= 4008, avg=1112.14, stdev=1544.72, samples=7 00:09:10.259 lat (usec) : 250=82.29%, 500=15.56%, 750=0.36%, 1000=0.03% 00:09:10.259 lat (msec) : 2=0.03%, 4=0.03%, 50=1.67%, 100=0.03% 00:09:10.259 cpu : usr=1.05%, sys=1.86%, ctx=3904, majf=0, minf=2 00:09:10.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.259 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.259 issued rwts: total=3901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.259 latency : target=0, window=0, percentile=100.00%, depth=1 
00:09:10.260 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=600058: Wed Dec 11 14:45:52 2024 00:09:10.260 read: IOPS=569, BW=2278KiB/s (2332kB/s)(7172KiB/3149msec) 00:09:10.260 slat (nsec): min=6693, max=50935, avg=14028.78, stdev=5571.31 00:09:10.260 clat (usec): min=194, max=42026, avg=1725.05, stdev=7655.11 00:09:10.260 lat (usec): min=203, max=42044, avg=1739.08, stdev=7656.66 00:09:10.260 clat percentiles (usec): 00:09:10.260 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:09:10.260 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:09:10.260 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 302], 00:09:10.260 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.260 | 99.99th=[42206] 00:09:10.260 bw ( KiB/s): min= 96, max=11784, per=22.22%, avg=2385.33, stdev=4669.87, samples=6 00:09:10.260 iops : min= 24, max= 2946, avg=596.33, stdev=1167.47, samples=6 00:09:10.260 lat (usec) : 250=64.60%, 500=31.66%, 750=0.06% 00:09:10.260 lat (msec) : 50=3.62% 00:09:10.260 cpu : usr=0.51%, sys=1.18%, ctx=1797, majf=0, minf=2 00:09:10.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.260 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.260 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.260 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=600077: Wed Dec 11 14:45:52 2024 00:09:10.260 read: IOPS=184, BW=736KiB/s (753kB/s)(2104KiB/2860msec) 00:09:10.260 slat (nsec): min=4543, max=34538, avg=12114.78, stdev=7068.45 00:09:10.260 clat (usec): min=195, max=42016, avg=5375.77, stdev=13492.50 00:09:10.260 lat (usec): min=202, max=42049, avg=5387.89, stdev=13496.38 00:09:10.260 clat percentiles (usec): 00:09:10.260 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 247], 00:09:10.260 | 30.00th=[ 258], 40.00th=[ 277], 50.00th=[ 314], 60.00th=[ 367], 00:09:10.260 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[41157], 95.00th=[41157], 00:09:10.260 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.260 | 99.99th=[42206] 00:09:10.260 bw ( KiB/s): min= 96, max= 2136, per=7.71%, avg=827.20, stdev=891.80, samples=5 00:09:10.260 iops : min= 24, max= 534, avg=206.80, stdev=222.95, samples=5 00:09:10.260 lat (usec) : 250=21.82%, 500=64.71%, 750=0.95% 00:09:10.260 lat (msec) : 50=12.33% 00:09:10.260 cpu : usr=0.10%, sys=0.28%, ctx=527, majf=0, minf=1 00:09:10.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.260 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.260 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.260 00:09:10.260 Run status group 0 (all jobs): 00:09:10.260 READ: bw=10.5MiB/s (11.0MB/s), 736KiB/s-4332KiB/s (753kB/s-4436kB/s), io=38.8MiB (40.7MB), run=2860-3705msec 00:09:10.260 00:09:10.260 Disk stats (read/write): 00:09:10.260 nvme0n1: ios=3719/0, merge=0/0, ticks=3162/0, in_queue=3162, util=94.88% 00:09:10.260 nvme0n2: ios=3895/0, merge=0/0, ticks=3461/0, in_queue=3461, util=95.73% 00:09:10.260 nvme0n3: 
ios=1836/0, merge=0/0, ticks=4170/0, in_queue=4170, util=100.00% 00:09:10.260 nvme0n4: ios=525/0, merge=0/0, ticks=2787/0, in_queue=2787, util=96.71% 00:09:10.517 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.517 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:10.775 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.775 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:11.033 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.033 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:11.291 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.291 14:45:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:11.549 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:11.549 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 599891 00:09:11.549 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:11.549 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:11.807 nvmf hotplug test: fio failed as expected 00:09:11.807 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:12.065 
14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.065 rmmod nvme_tcp 00:09:12.065 rmmod nvme_fabrics 00:09:12.065 rmmod nvme_keyring 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 597856 ']' 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 597856 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 597856 ']' 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 597856 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597856 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597856' 00:09:12.065 killing process with pid 597856 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 597856 00:09:12.065 14:45:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 597856 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.324 14:45:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.324 14:45:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.861 00:09:14.861 real 0m24.258s 00:09:14.861 user 1m25.590s 00:09:14.861 sys 0m6.474s 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.861 ************************************ 00:09:14.861 END TEST nvmf_fio_target 00:09:14.861 ************************************ 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.861 ************************************ 00:09:14.861 START TEST nvmf_bdevio 00:09:14.861 ************************************ 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.861 * Looking for test storage... 
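Between the two tests, nvmftestfini (traced just above) unwinds the nvmf_fio_target fixture so bdevio.sh can build a fresh one: the kernel NVMe/TCP modules are unloaded, the SPDK target process is killed, the test's iptables rule is stripped, the target network namespace is removed, and the initiator-side address is flushed. Replayed as plain commands (a sketch: the body of _remove_spdk_ns is not shown in the trace, so the `ip netns delete` line is an assumption about what it does here):

    sync
    modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring
    kill 597856                   # nvmfpid of the target in this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # remove the test ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1           # clear the initiator address
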
00:09:14.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:14.861 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.862 --rc genhtml_branch_coverage=1 00:09:14.862 --rc genhtml_function_coverage=1 00:09:14.862 --rc genhtml_legend=1 00:09:14.862 --rc geninfo_all_blocks=1 00:09:14.862 --rc geninfo_unexecuted_blocks=1 00:09:14.862 00:09:14.862 ' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.862 --rc genhtml_branch_coverage=1 00:09:14.862 --rc genhtml_function_coverage=1 00:09:14.862 --rc genhtml_legend=1 00:09:14.862 --rc geninfo_all_blocks=1 00:09:14.862 --rc geninfo_unexecuted_blocks=1 00:09:14.862 00:09:14.862 ' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.862 --rc genhtml_branch_coverage=1 00:09:14.862 --rc genhtml_function_coverage=1 00:09:14.862 --rc genhtml_legend=1 00:09:14.862 --rc geninfo_all_blocks=1 00:09:14.862 --rc geninfo_unexecuted_blocks=1 00:09:14.862 00:09:14.862 ' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.862 --rc genhtml_branch_coverage=1 00:09:14.862 --rc genhtml_function_coverage=1 00:09:14.862 --rc genhtml_legend=1 00:09:14.862 --rc geninfo_all_blocks=1 00:09:14.862 --rc geninfo_unexecuted_blocks=1 00:09:14.862 00:09:14.862 ' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.862 14:45:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:16.768 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:16.768 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.768 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.768 14:45:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:16.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:16.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.769 
14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.769 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:09:17.028 00:09:17.028 --- 10.0.0.2 ping statistics --- 00:09:17.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.028 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:09:17.028 00:09:17.028 --- 10.0.0.1 ping statistics --- 00:09:17.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.028 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.028 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=602742 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 602742 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 602742 ']' 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.029 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.029 [2024-12-11 14:45:59.652062] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
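With both pings answered, the fixture's topology is in place: the two ports of the E810 NIC (ice driver) detected earlier become cvl_0_0, moved into namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, and cvl_0_1, left in the host namespace as the initiator side at 10.0.0.1. Condensed from the nvmf_tcp_init trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator, host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # host -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host

The target itself is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x78), which is why the reactors reported below come up on cores 3-6: mask 0x78.
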
00:09:17.029 [2024-12-11 14:45:59.652148] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.029 [2024-12-11 14:45:59.726052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.029 [2024-12-11 14:45:59.780873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.029 [2024-12-11 14:45:59.780945] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.029 [2024-12-11 14:45:59.780982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.029 [2024-12-11 14:45:59.780993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.029 [2024-12-11 14:45:59.781003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.029 [2024-12-11 14:45:59.782650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.029 [2024-12-11 14:45:59.782673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.029 [2024-12-11 14:45:59.782699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:17.029 [2024-12-11 14:45:59.782702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.287 [2024-12-11 14:45:59.936508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.287 Malloc0 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.287 14:45:59 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.287 14:45:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.287 [2024-12-11 14:46:00.002185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.287 { 00:09:17.287 "params": { 00:09:17.287 "name": "Nvme$subsystem", 00:09:17.287 "trtype": "$TEST_TRANSPORT", 00:09:17.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.287 "adrfam": "ipv4", 00:09:17.287 "trsvcid": "$NVMF_PORT", 00:09:17.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.287 "hdgst": ${hdgst:-false}, 00:09:17.287 "ddgst": ${ddgst:-false} 00:09:17.287 }, 00:09:17.287 "method": "bdev_nvme_attach_controller" 00:09:17.287 } 00:09:17.287 EOF 00:09:17.287 )") 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:17.287 14:46:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:17.287 "params": { 00:09:17.287 "name": "Nvme1", 00:09:17.287 "trtype": "tcp", 00:09:17.287 "traddr": "10.0.0.2", 00:09:17.287 "adrfam": "ipv4", 00:09:17.287 "trsvcid": "4420", 00:09:17.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.288 "hdgst": false, 00:09:17.288 "ddgst": false 00:09:17.288 }, 00:09:17.288 "method": "bdev_nvme_attach_controller" 00:09:17.288 }' 00:09:17.288 [2024-12-11 14:46:00.053969] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
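
The bdevio bring-up traced above reduces to five RPC calls against the running nvmf_tgt, after which gen_nvmf_target_json emits the bdev_nvme_attach_controller fragment printed just before this point and hands it to the bdevio binary via --json /dev/fd/62. A minimal stand-alone sketch of the same sequence, assuming a stock SPDK checkout with scripts/rpc.py talking to the default /var/tmp/spdk.sock (the helper path and socket are assumptions, not taken from this log):

    #!/usr/bin/env bash
    # Target-side bring-up, mirroring the rpc_cmd calls in the trace above.
    rpc=./scripts/rpc.py                                  # assumed location of the SPDK RPC helper
    $rpc nvmf_create_transport -t tcp -o -u 8192          # flags copied verbatim from the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, the printed fragment becomes one controller entry in the JSON config consumed by bdevio; the "Nvme1" controller it attaches is what appears below as the Nvme1n1 test target.
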
00:09:17.288 [2024-12-11 14:46:00.054049] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602765 ] 00:09:17.546 [2024-12-11 14:46:00.124490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.546 [2024-12-11 14:46:00.189999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.546 [2024-12-11 14:46:00.190049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.546 [2024-12-11 14:46:00.190053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.803 I/O targets: 00:09:17.803 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:17.803 00:09:17.803 00:09:17.803 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.803 http://cunit.sourceforge.net/ 00:09:17.803 00:09:17.803 00:09:17.803 Suite: bdevio tests on: Nvme1n1 00:09:17.803 Test: blockdev write read block ...passed 00:09:18.061 Test: blockdev write zeroes read block ...passed 00:09:18.061 Test: blockdev write zeroes read no split ...passed 00:09:18.061 Test: blockdev write zeroes read split ...passed 00:09:18.061 Test: blockdev write zeroes read split partial ...passed 00:09:18.062 Test: blockdev reset ...[2024-12-11 14:46:00.647414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:18.062 [2024-12-11 14:46:00.647519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c5920 (9): Bad file descriptor 00:09:18.062 [2024-12-11 14:46:00.704284] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:18.062 passed 00:09:18.062 Test: blockdev write read 8 blocks ...passed 00:09:18.062 Test: blockdev write read size > 128k ...passed 00:09:18.062 Test: blockdev write read invalid size ...passed 00:09:18.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:18.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:18.062 Test: blockdev write read max offset ...passed 00:09:18.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:18.319 Test: blockdev writev readv 8 blocks ...passed 00:09:18.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:18.319 Test: blockdev writev readv block ...passed 00:09:18.319 Test: blockdev writev readv size > 128k ...passed 00:09:18.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:18.319 Test: blockdev comparev and writev ...[2024-12-11 14:46:01.000617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.319 [2024-12-11 14:46:01.000656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:18.319 [2024-12-11 14:46:01.000680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.319 [2024-12-11 14:46:01.000699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:18.319 [2024-12-11 14:46:01.001008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.319 [2024-12-11 14:46:01.001032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:18.319 [2024-12-11 14:46:01.001054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.319 [2024-12-11 14:46:01.001071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:18.319 [2024-12-11 14:46:01.001368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.319 [2024-12-11 14:46:01.001393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:18.320 [2024-12-11 14:46:01.001415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.320 [2024-12-11 14:46:01.001431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:18.320 [2024-12-11 14:46:01.001744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.320 [2024-12-11 14:46:01.001780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:18.320 [2024-12-11 14:46:01.001803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.320 [2024-12-11 14:46:01.001820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:18.320 passed 00:09:18.320 Test: blockdev nvme passthru rw ...passed 00:09:18.320 Test: blockdev nvme passthru vendor specific ...[2024-12-11 14:46:01.085787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.320 [2024-12-11 14:46:01.085815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:18.320 [2024-12-11 14:46:01.085968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.320 [2024-12-11 14:46:01.085991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:18.320 [2024-12-11 14:46:01.086135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.320 [2024-12-11 14:46:01.086157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:18.320 [2024-12-11 14:46:01.086307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.320 [2024-12-11 14:46:01.086330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:18.320 passed 00:09:18.577 Test: blockdev nvme admin passthru ...passed 00:09:18.577 Test: blockdev copy ...passed 00:09:18.577 00:09:18.577 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.577 suites 1 1 n/a 0 0 00:09:18.577 tests 23 23 23 0 0 00:09:18.577 asserts 152 152 152 0 n/a 00:09:18.577 00:09:18.577 Elapsed time = 1.290 seconds 00:09:18.577 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.577 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.578 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.835 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.836 rmmod nvme_tcp 00:09:18.836 rmmod nvme_fabrics 00:09:18.836 rmmod nvme_keyring 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
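
The teardown traced here follows a fixed pattern: drop the trap, unload the kernel initiator modules (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe's own verbose output), then kill the target process by PID. Module removal runs with errexit suspended because references can linger briefly after the last disconnect; a condensed sketch of that retry idiom (the back-off delay is an assumption, and the real helper in nvmf/common.sh may differ in detail):

    # Retry unloading nvme-tcp until its reference count drops to zero.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # -v echoes the rmmod lines seen in the log
        sleep 1                            # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics
    set -e
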
00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 602742 ']' 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 602742 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 602742 ']' 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 602742 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 602742 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 602742' 00:09:18.836 killing process with pid 602742 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 602742 00:09:18.836 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 602742 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.094 14:46:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.997 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:20.997 00:09:20.997 real 0m6.618s 00:09:20.997 user 0m11.066s 00:09:20.997 sys 0m2.218s 00:09:20.997 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.997 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:20.997 ************************************ 00:09:20.997 END TEST nvmf_bdevio 00:09:20.997 ************************************ 00:09:21.256 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:21.256 00:09:21.256 real 3m56.778s 00:09:21.256 user 10m17.081s 00:09:21.256 sys 1m7.679s 00:09:21.256 
14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.256 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.256 ************************************ 00:09:21.256 END TEST nvmf_target_core 00:09:21.256 ************************************ 00:09:21.257 14:46:03 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:21.257 14:46:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.257 14:46:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.257 14:46:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:21.257 ************************************ 00:09:21.257 START TEST nvmf_target_extra 00:09:21.257 ************************************ 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:21.257 * Looking for test storage... 00:09:21.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.257 --rc genhtml_branch_coverage=1 00:09:21.257 --rc genhtml_function_coverage=1 00:09:21.257 --rc genhtml_legend=1 00:09:21.257 --rc geninfo_all_blocks=1 00:09:21.257 --rc geninfo_unexecuted_blocks=1 00:09:21.257 00:09:21.257 ' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.257 --rc genhtml_branch_coverage=1 00:09:21.257 --rc genhtml_function_coverage=1 00:09:21.257 --rc genhtml_legend=1 00:09:21.257 --rc geninfo_all_blocks=1 00:09:21.257 --rc geninfo_unexecuted_blocks=1 00:09:21.257 00:09:21.257 ' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.257 --rc genhtml_branch_coverage=1 00:09:21.257 --rc genhtml_function_coverage=1 00:09:21.257 --rc genhtml_legend=1 00:09:21.257 --rc geninfo_all_blocks=1 00:09:21.257 --rc geninfo_unexecuted_blocks=1 00:09:21.257 00:09:21.257 ' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.257 --rc genhtml_branch_coverage=1 00:09:21.257 --rc genhtml_function_coverage=1 00:09:21.257 --rc genhtml_legend=1 00:09:21.257 --rc geninfo_all_blocks=1 00:09:21.257 --rc geninfo_unexecuted_blocks=1 00:09:21.257 00:09:21.257 ' 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
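
The long run of scripts/common.sh trace lines above and below is the lcov version probe: the harness compares the installed lcov version against 2 (here "lt 1.15 2") to decide which coverage flags to export. The comparison splits each version string on ".", "-" and ":" and walks the fields numerically; a simplified restatement of that logic follows (the real cmp_versions also coerces non-numeric fields through the decimal helper seen in the trace, which is omitted here):

    # version_lt A B: succeeds when version A sorts strictly below version B.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace's lt 1.15 2
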
00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.257 14:46:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.257 14:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:21.517 ************************************ 00:09:21.517 START TEST nvmf_example 00:09:21.517 ************************************ 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:21.517 * Looking for test storage... 
00:09:21.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.517 --rc genhtml_branch_coverage=1 00:09:21.517 --rc genhtml_function_coverage=1 00:09:21.517 --rc genhtml_legend=1 00:09:21.517 --rc geninfo_all_blocks=1 00:09:21.517 --rc geninfo_unexecuted_blocks=1 00:09:21.517 00:09:21.517 ' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.517 --rc genhtml_branch_coverage=1 00:09:21.517 --rc genhtml_function_coverage=1 00:09:21.517 --rc genhtml_legend=1 00:09:21.517 --rc geninfo_all_blocks=1 00:09:21.517 --rc geninfo_unexecuted_blocks=1 00:09:21.517 00:09:21.517 ' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.517 --rc genhtml_branch_coverage=1 00:09:21.517 --rc genhtml_function_coverage=1 00:09:21.517 --rc genhtml_legend=1 00:09:21.517 --rc geninfo_all_blocks=1 00:09:21.517 --rc geninfo_unexecuted_blocks=1 00:09:21.517 00:09:21.517 ' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.517 --rc genhtml_branch_coverage=1 00:09:21.517 --rc genhtml_function_coverage=1 00:09:21.517 --rc genhtml_legend=1 00:09:21.517 --rc geninfo_all_blocks=1 00:09:21.517 --rc geninfo_unexecuted_blocks=1 00:09:21.517 00:09:21.517 ' 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:21.517 14:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.517 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.518 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:21.518 14:46:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.518 14:46:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:23.421 14:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:23.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:23.421 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.421 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:23.422 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:23.422 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.422 14:46:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.422 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:23.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:09:23.681 00:09:23.681 --- 10.0.0.2 ping statistics --- 00:09:23.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.681 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:23.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:09:23.681 00:09:23.681 --- 10.0.0.1 ping statistics --- 00:09:23.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.681 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.681 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=605033 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 605033 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 605033 ']' 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example 
00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:23.940 14:46:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.874 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:09:25.132 14:46:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:35.100 Initializing NVMe Controllers
00:09:35.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:35.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:35.100 Initialization complete. Launching workers.
00:09:35.100 ========================================================
00:09:35.100 Latency(us)
00:09:35.100 Device Information : IOPS MiB/s Average min max
00:09:35.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14629.46 57.15 4374.20 903.95 16300.83
00:09:35.100 ========================================================
00:09:35.100 Total : 14629.46 57.15 4374.20 903.95 16300.83
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:35.358 rmmod nvme_tcp
00:09:35.358 rmmod nvme_fabrics
00:09:35.358 rmmod nvme_keyring
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 605033 ']'
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 605033
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 605033 ']'
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 605033
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:35.358 14:46:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 605033
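
Everything between the app start and the performance run above is plain JSON-RPC against that socket; rpc_cmd in the trace is a thin wrapper over SPDK's scripts/rpc.py. A condensed equivalent, run from the SPDK tree (a sketch; -o and -u 8192 are copied verbatim from the run, -u being the I/O unit size in bytes):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                   # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf invocation is a 10 s, queue-depth 64, 4 KiB random 30/70 read/write workload, and its result table above works out to roughly 14.6k IOPS at 4.37 ms average latency against the Malloc0-backed namespace.
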
00:09:35.358 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:09:35.358 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:09:35.358 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 605033'
00:09:35.358 killing process with pid 605033
00:09:35.358 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 605033
00:09:35.358 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 605033
00:09:35.618 nvmf threads initialize successfully
00:09:35.618 bdev subsystem init successfully
00:09:35.618 created a nvmf target service
00:09:35.618 create targets's poll groups done
00:09:35.618 all subsystems of target started
00:09:35.618 nvmf target is running
00:09:35.618 all subsystems of target stopped
00:09:35.618 destroy targets's poll groups done
00:09:35.618 destroyed the nvmf target service
00:09:35.618 bdev subsystem finish successfully
00:09:35.618 nvmf threads destroy successfully
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:35.618 14:46:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:38.163 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:38.164
00:09:38.164 real 0m16.293s
00:09:38.164 user 0m46.040s
00:09:38.164 sys 0m3.311s
00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:38.164 ************************************
00:09:38.164 END TEST nvmf_example
00:09:38.164 ************************************
00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:09:38.164 14:46:20
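
nvmftestfini unwinds the setup in reverse: unload the initiator modules, kill the target (its shutdown banner above shows the subsystems and poll groups stopping cleanly), restore iptables without the SPDK_NVMF entries, and drop the namespace plus the leftover host-side address. Roughly (a sketch; _remove_spdk_ns is assumed here to amount to deleting cvl_0_0_ns_spdk):

    modprobe -r nvme-tcp nvme-fabrics        # nvme_keyring goes too, per the rmmod lines above
    kill 605033 && wait 605033               # the example target started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk          # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                 # clear the test address from the peer interface
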
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:38.164 ************************************ 00:09:38.164 START TEST nvmf_filesystem 00:09:38.164 ************************************ 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:38.164 * Looking for test storage... 00:09:38.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.164 --rc genhtml_branch_coverage=1 00:09:38.164 --rc genhtml_function_coverage=1 00:09:38.164 --rc genhtml_legend=1 00:09:38.164 --rc geninfo_all_blocks=1 00:09:38.164 --rc geninfo_unexecuted_blocks=1 00:09:38.164 00:09:38.164 ' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.164 --rc genhtml_branch_coverage=1 00:09:38.164 --rc genhtml_function_coverage=1 00:09:38.164 --rc genhtml_legend=1 00:09:38.164 --rc geninfo_all_blocks=1 00:09:38.164 --rc geninfo_unexecuted_blocks=1 00:09:38.164 00:09:38.164 ' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.164 --rc genhtml_branch_coverage=1 00:09:38.164 --rc genhtml_function_coverage=1 00:09:38.164 --rc genhtml_legend=1 00:09:38.164 --rc geninfo_all_blocks=1 00:09:38.164 --rc geninfo_unexecuted_blocks=1 00:09:38.164 00:09:38.164 ' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.164 --rc genhtml_branch_coverage=1 00:09:38.164 --rc genhtml_function_coverage=1 00:09:38.164 --rc genhtml_legend=1 00:09:38.164 --rc geninfo_all_blocks=1 00:09:38.164 --rc geninfo_unexecuted_blocks=1 00:09:38.164 00:09:38.164 ' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:38.164 14:46:20 
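
The lcov probe above is scripts/common.sh deciding whether the installed lcov predates 2.0: version strings are split on '.', '-' and ':' and compared field by field, and because 1.15 < 2 the older --rc lcov_branch_coverage=1 spelling of the coverage options is selected. A condensed sketch of that comparison (the real helpers are cmp_versions and its lt wrapper):

    lt() {   # "version $1 < version $2"
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "pre-2.0 lcov: use the --rc lcov_*_coverage flags"
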
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:38.164 
14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:38.164 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:38.165 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:38.165 #define SPDK_CONFIG_H 00:09:38.165 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:38.165 #define SPDK_CONFIG_APPS 1 00:09:38.165 #define SPDK_CONFIG_ARCH native 00:09:38.165 #undef SPDK_CONFIG_ASAN 00:09:38.165 #undef SPDK_CONFIG_AVAHI 00:09:38.165 #undef SPDK_CONFIG_CET 00:09:38.165 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:38.165 #define SPDK_CONFIG_COVERAGE 1 00:09:38.165 #define SPDK_CONFIG_CROSS_PREFIX 00:09:38.165 #undef SPDK_CONFIG_CRYPTO 00:09:38.165 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:38.165 #undef SPDK_CONFIG_CUSTOMOCF 00:09:38.165 #undef SPDK_CONFIG_DAOS 00:09:38.165 #define SPDK_CONFIG_DAOS_DIR 00:09:38.165 #define SPDK_CONFIG_DEBUG 1 00:09:38.165 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:38.165 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:38.165 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:38.165 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:38.165 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:38.165 #undef SPDK_CONFIG_DPDK_UADK 00:09:38.165 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:38.165 #define SPDK_CONFIG_EXAMPLES 1 00:09:38.165 #undef SPDK_CONFIG_FC 00:09:38.165 #define SPDK_CONFIG_FC_PATH 00:09:38.165 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:38.165 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:38.165 #define SPDK_CONFIG_FSDEV 1 00:09:38.165 #undef SPDK_CONFIG_FUSE 00:09:38.165 #undef SPDK_CONFIG_FUZZER 00:09:38.165 #define SPDK_CONFIG_FUZZER_LIB 00:09:38.165 #undef SPDK_CONFIG_GOLANG 00:09:38.165 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:38.165 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:38.165 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:38.165 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:38.165 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:38.165 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:38.165 #undef SPDK_CONFIG_HAVE_LZ4 00:09:38.165 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:38.165 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:38.165 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:38.165 #define SPDK_CONFIG_IDXD 1 00:09:38.165 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:38.165 #undef SPDK_CONFIG_IPSEC_MB 00:09:38.165 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:38.165 #define SPDK_CONFIG_ISAL 1 00:09:38.165 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:38.165 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:38.165 #define SPDK_CONFIG_LIBDIR 00:09:38.165 #undef SPDK_CONFIG_LTO 00:09:38.165 #define SPDK_CONFIG_MAX_LCORES 128 00:09:38.165 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:38.165 #define SPDK_CONFIG_NVME_CUSE 1 00:09:38.165 #undef SPDK_CONFIG_OCF 00:09:38.165 #define SPDK_CONFIG_OCF_PATH 00:09:38.165 #define SPDK_CONFIG_OPENSSL_PATH 00:09:38.165 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:38.165 #define SPDK_CONFIG_PGO_DIR 00:09:38.165 #undef SPDK_CONFIG_PGO_USE 00:09:38.165 #define SPDK_CONFIG_PREFIX /usr/local 00:09:38.165 #undef SPDK_CONFIG_RAID5F 00:09:38.165 #undef SPDK_CONFIG_RBD 00:09:38.165 #define SPDK_CONFIG_RDMA 1 00:09:38.166 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:38.166 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:38.166 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:38.166 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:38.166 #define SPDK_CONFIG_SHARED 1 00:09:38.166 #undef SPDK_CONFIG_SMA 00:09:38.166 #define SPDK_CONFIG_TESTS 1 00:09:38.166 #undef SPDK_CONFIG_TSAN 
00:09:38.166 #define SPDK_CONFIG_UBLK 1 00:09:38.166 #define SPDK_CONFIG_UBSAN 1 00:09:38.166 #undef SPDK_CONFIG_UNIT_TESTS 00:09:38.166 #undef SPDK_CONFIG_URING 00:09:38.166 #define SPDK_CONFIG_URING_PATH 00:09:38.166 #undef SPDK_CONFIG_URING_ZNS 00:09:38.166 #undef SPDK_CONFIG_USDT 00:09:38.166 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:38.166 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:38.166 #define SPDK_CONFIG_VFIO_USER 1 00:09:38.166 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:38.166 #define SPDK_CONFIG_VHOST 1 00:09:38.166 #define SPDK_CONFIG_VIRTIO 1 00:09:38.166 #undef SPDK_CONFIG_VTUNE 00:09:38.166 #define SPDK_CONFIG_VTUNE_DIR 00:09:38.166 #define SPDK_CONFIG_WERROR 1 00:09:38.166 #define SPDK_CONFIG_WPDK_DIR 00:09:38.166 #undef SPDK_CONFIG_XNVME 00:09:38.166 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:38.166 14:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
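
The ': 0' / ': 1' line paired with each export above is bash's default-assignment idiom: autotest_common.sh gives every autotest knob a default only when autorun-spdk.conf has not already set it, then exports the result for child scripts. The trace is consistent with this pattern (a sketch):

    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}"   # traces as ': 1' here because the conf file set it
    export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_ISCSI:=0}"            # traces as ': 0': unset, so the default sticks
    export SPDK_TEST_ISCSI
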
00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:38.166 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:38.167 14:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:38.167 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 606730 ]] 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 606730 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:38.168 
14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.7VDh0D 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7VDh0D/tests/target /tmp/spdk.7VDh0D 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:38.168 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=59486687232 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67273338880 00:09:38.169 14:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7786651648 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33626636288 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636667392 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13432246272 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=13454667776 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22421504 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=33636147200 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=33636671488 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=524288 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6727319552 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6727331840 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:38.169 * Looking for test 
storage... 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=59486687232 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10001244160 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:38.169 14:46:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.169 --rc genhtml_branch_coverage=1 00:09:38.169 --rc genhtml_function_coverage=1 00:09:38.169 --rc genhtml_legend=1 00:09:38.169 --rc geninfo_all_blocks=1 00:09:38.169 --rc geninfo_unexecuted_blocks=1 00:09:38.169 00:09:38.169 ' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.169 --rc genhtml_branch_coverage=1 00:09:38.169 --rc genhtml_function_coverage=1 00:09:38.169 --rc genhtml_legend=1 00:09:38.169 --rc geninfo_all_blocks=1 00:09:38.169 --rc geninfo_unexecuted_blocks=1 00:09:38.169 00:09:38.169 ' 00:09:38.169 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:38.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.170 --rc genhtml_branch_coverage=1 00:09:38.170 --rc genhtml_function_coverage=1 00:09:38.170 --rc genhtml_legend=1 00:09:38.170 --rc geninfo_all_blocks=1 00:09:38.170 --rc geninfo_unexecuted_blocks=1 00:09:38.170 00:09:38.170 ' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.170 --rc genhtml_branch_coverage=1 00:09:38.170 --rc genhtml_function_coverage=1 00:09:38.170 --rc genhtml_legend=1 00:09:38.170 --rc geninfo_all_blocks=1 00:09:38.170 --rc geninfo_unexecuted_blocks=1 00:09:38.170 00:09:38.170 ' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.170 14:46:20 
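[Aside] The `[: : integer expression expected` diagnostic above comes from nvmf/common.sh line 33 evaluating `'[' '' -eq 1 ']'`: an empty expansion reaches an integer comparison, and `test`/`[` cannot treat an empty string as a number, so the test returns status 2 with that message. It is harmless here (the run continues, so the test presumably sits in a conditional where status 2 reads as false), but the expansion is unguarded. A minimal sketch of the failure and a defensive rewrite — the variable name is a placeholder, since the trace does not show which variable was empty:

    # failing pattern -- when $SOME_FLAG is empty, [ sees '' where an integer is required
    SOME_FLAG=                                 # placeholder; the real variable is not named in the log
    [ "$SOME_FLAG" -eq 1 ] && echo yes         # -> "[: : integer expression expected", status 2

    # guarded rewrite -- default the expansion so the comparison always gets an integer
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo yes    # empty/unset compares as 0, no diagnostic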
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.170 14:46:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:40.706 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:40.707 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:40.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.707 14:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:40.707 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:40.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:40.707 14:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.707 14:46:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:40.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:09:40.707 00:09:40.707 --- 10.0.0.2 ping statistics --- 00:09:40.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.707 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:09:40.707 00:09:40.707 --- 10.0.0.1 ping statistics --- 00:09:40.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.707 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.707 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.707 ************************************ 00:09:40.708 START TEST nvmf_filesystem_no_in_capsule 00:09:40.708 ************************************ 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=608492 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 608492 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 608492 ']' 00:09:40.708 14:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.708 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.708 [2024-12-11 14:46:23.235763] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:09:40.708 [2024-12-11 14:46:23.235860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.708 [2024-12-11 14:46:23.314393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.708 [2024-12-11 14:46:23.374849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.708 [2024-12-11 14:46:23.374927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.708 [2024-12-11 14:46:23.374941] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.708 [2024-12-11 14:46:23.374952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.708 [2024-12-11 14:46:23.374962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
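[Aside] The trace above (autotest_common.sh @835-@844) is `waitforlisten 608492` setting up its readiness wait: the target pid 608492, RPC socket /var/tmp/spdk.sock, up to 100 retries, then the "Waiting for process..." banner; the loop body itself runs under xtrace_disable and is not shown. A minimal sketch of that wait, under the assumption that each retry checks the pid is alive and the UNIX socket exists — the real helper may probe readiness differently:

    # sketch only: pid + socket polling is assumed; the _sketch suffix marks
    # that this is not SPDK's actual helper, whose loop body the log suppresses
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
            [ -S "$rpc_addr" ] && return 0           # RPC socket is up; ready
            sleep 0.5
        done
        return 1
    }

Consistent with this shape, the trace below shows the post-wait path: `(( i == 0 ))` at @864 and `return 0` at @868 once the socket is ready.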
00:09:40.708 [2024-12-11 14:46:23.376555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.708 [2024-12-11 14:46:23.376610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.708 [2024-12-11 14:46:23.379565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.708 [2024-12-11 14:46:23.379576] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.967 [2024-12-11 14:46:23.540037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.967 Malloc1 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.967 14:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.967 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.967 [2024-12-11 14:46:23.735249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:41.225 { 00:09:41.225 "name": "Malloc1", 00:09:41.225 "aliases": [ 00:09:41.225 "643c2514-b84c-409d-9ea5-cbc86614c20c" 00:09:41.225 ], 00:09:41.225 "product_name": "Malloc disk", 00:09:41.225 "block_size": 512, 00:09:41.225 "num_blocks": 1048576, 00:09:41.225 "uuid": "643c2514-b84c-409d-9ea5-cbc86614c20c", 00:09:41.225 "assigned_rate_limits": { 00:09:41.225 "rw_ios_per_sec": 0, 00:09:41.225 "rw_mbytes_per_sec": 0, 00:09:41.225 "r_mbytes_per_sec": 0, 00:09:41.225 "w_mbytes_per_sec": 0 00:09:41.225 }, 00:09:41.225 "claimed": true, 00:09:41.225 "claim_type": "exclusive_write", 00:09:41.225 "zoned": false, 00:09:41.225 "supported_io_types": { 00:09:41.225 "read": 
true, 00:09:41.225 "write": true, 00:09:41.225 "unmap": true, 00:09:41.225 "flush": true, 00:09:41.225 "reset": true, 00:09:41.225 "nvme_admin": false, 00:09:41.225 "nvme_io": false, 00:09:41.225 "nvme_io_md": false, 00:09:41.225 "write_zeroes": true, 00:09:41.225 "zcopy": true, 00:09:41.225 "get_zone_info": false, 00:09:41.225 "zone_management": false, 00:09:41.225 "zone_append": false, 00:09:41.225 "compare": false, 00:09:41.225 "compare_and_write": false, 00:09:41.225 "abort": true, 00:09:41.225 "seek_hole": false, 00:09:41.225 "seek_data": false, 00:09:41.225 "copy": true, 00:09:41.225 "nvme_iov_md": false 00:09:41.225 }, 00:09:41.225 "memory_domains": [ 00:09:41.225 { 00:09:41.225 "dma_device_id": "system", 00:09:41.225 "dma_device_type": 1 00:09:41.225 }, 00:09:41.225 { 00:09:41.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.225 "dma_device_type": 2 00:09:41.225 } 00:09:41.225 ], 00:09:41.225 "driver_specific": {} 00:09:41.225 } 00:09:41.225 ]' 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:41.225 14:46:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.791 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.791 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:41.791 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.791 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:41.791 14:46:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:44.320 14:46:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:44.578 14:46:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:45.511 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:45.512 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:45.512 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.512 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.512 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.770 ************************************ 00:09:45.770 START TEST filesystem_ext4 00:09:45.770 ************************************ 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
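Condensed from the rpc_cmd and nvme-cli calls traced above, the whole provisioning path from transport to partitioned block device is only a handful of commands. A sketch assuming rpc.py talks to the default /var/tmp/spdk.sock; the until-loop is a simplified stand-in for the harness's waitforserial helper, everything else is copied from the trace:

RPC=./scripts/rpc.py

# TCP transport with 8192-byte IO unit; -c 0 disables in-capsule data for
# this no_in_capsule variant (flags taken verbatim from the trace above).
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0

# 512 MiB ramdisk for the namespace: 1048576 blocks of 512 bytes.
$RPC bdev_malloc_create 512 512 -b Malloc1

# Subsystem with any-host access (-a) and the serial the host greps for.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: connect, wait for the namespace to surface, then partition it.
sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe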
00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:45.770 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:45.770 mke2fs 1.47.0 (5-Feb-2023) 00:09:45.770 Discarding device blocks: 0/522240 done 00:09:45.770 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:45.770 Filesystem UUID: a7f09c64-1c2e-40f6-ba6f-72de05f7e5dc 00:09:45.770 Superblock backups stored on blocks: 00:09:45.770 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:45.770 00:09:45.770 Allocating group tables: 0/64 done 00:09:45.770 Writing inode tables: 0/64 done 00:09:46.027 Creating journal (8192 blocks): done 00:09:46.027 Writing superblocks and filesystem accounting information: 0/64 done 00:09:46.027 00:09:46.027 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:46.027 14:46:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:51.353 14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:51.353 14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:51.353 14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:51.353 14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:51.353 14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:51.353 14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:51.353 
14:46:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 608492 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:51.353 00:09:51.353 real 0m5.723s 00:09:51.353 user 0m0.023s 00:09:51.353 sys 0m0.063s 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:51.353 ************************************ 00:09:51.353 END TEST filesystem_ext4 00:09:51.353 ************************************ 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.353 ************************************ 00:09:51.353 START TEST filesystem_btrfs 00:09:51.353 ************************************ 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:51.353 14:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:51.353 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:51.611 btrfs-progs v6.8.1 00:09:51.611 See https://btrfs.readthedocs.io for more information. 00:09:51.611 00:09:51.611 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:51.611 NOTE: several default settings have changed in version 5.15, please make sure 00:09:51.611 this does not affect your deployments: 00:09:51.611 - DUP for metadata (-m dup) 00:09:51.611 - enabled no-holes (-O no-holes) 00:09:51.611 - enabled free-space-tree (-R free-space-tree) 00:09:51.611 00:09:51.611 Label: (null) 00:09:51.611 UUID: e338db9a-4c00-4638-b5b8-c6019d38b32e 00:09:51.611 Node size: 16384 00:09:51.611 Sector size: 4096 (CPU page size: 4096) 00:09:51.611 Filesystem size: 510.00MiB 00:09:51.611 Block group profiles: 00:09:51.611 Data: single 8.00MiB 00:09:51.611 Metadata: DUP 32.00MiB 00:09:51.611 System: DUP 8.00MiB 00:09:51.611 SSD detected: yes 00:09:51.611 Zoned device: no 00:09:51.611 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:51.611 Checksum: crc32c 00:09:51.611 Number of devices: 1 00:09:51.611 Devices: 00:09:51.611 ID SIZE PATH 00:09:51.611 1 510.00MiB /dev/nvme0n1p1 00:09:51.611 00:09:51.611 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:51.611 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 608492 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:52.176 
14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:52.176 00:09:52.176 real 0m0.637s 00:09:52.176 user 0m0.019s 00:09:52.176 sys 0m0.097s 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:52.176 ************************************ 00:09:52.176 END TEST filesystem_btrfs 00:09:52.176 ************************************ 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.176 ************************************ 00:09:52.176 START TEST filesystem_xfs 00:09:52.176 ************************************ 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:52.176 14:46:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:52.176 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:52.176 = sectsz=512 attr=2, projid32bit=1 00:09:52.176 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:52.176 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:52.176 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:52.176 = sunit=0 swidth=0 blks 00:09:52.176 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:52.176 log =internal log bsize=4096 blocks=16384, version=2 00:09:52.176 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:52.176 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:53.109 Discarding blocks...Done. 00:09:53.109 14:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:53.109 14:46:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 608492 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:56.389 00:09:56.389 real 0m3.795s 00:09:56.389 user 0m0.010s 00:09:56.389 sys 0m0.069s 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:56.389 ************************************ 00:09:56.389 END TEST filesystem_xfs 00:09:56.389 ************************************ 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.389 14:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.389 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 608492 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 608492 ']' 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 608492 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608492 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608492' 00:09:56.390 killing process with pid 608492 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 608492 00:09:56.390 14:46:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 608492 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:56.648 00:09:56.648 real 0m16.014s 00:09:56.648 user 1m1.958s 00:09:56.648 sys 0m2.027s 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.648 ************************************ 00:09:56.648 END TEST nvmf_filesystem_no_in_capsule 00:09:56.648 ************************************ 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.648 ************************************ 00:09:56.648 START TEST nvmf_filesystem_in_capsule 00:09:56.648 ************************************ 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=610594 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 610594 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 610594 ']' 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
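This second pass (nvmf_filesystem_part 4096) differs from the first only in the transport's in-capsule data size: the transport below is created with -c 4096 rather than -c 0, so the host may embed up to 4 KiB of write payload directly in the NVMe/TCP command capsule instead of waiting for a separate ready-to-transfer exchange. A side-by-side sketch of the two variants, under the same rpc.py assumptions as before:

# no_in_capsule variant: every write payload moves in a separate data PDU
# after the target signals it is ready to receive.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0

# in_capsule variant: writes up to 4096 bytes travel inside the command
# capsule itself, saving one round trip for small IO such as fs metadata.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096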
00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.648 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.648 [2024-12-11 14:46:39.303348] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:09:56.648 [2024-12-11 14:46:39.303429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.648 [2024-12-11 14:46:39.377493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.907 [2024-12-11 14:46:39.435702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.907 [2024-12-11 14:46:39.435766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.907 [2024-12-11 14:46:39.435780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.907 [2024-12-11 14:46:39.435791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.907 [2024-12-11 14:46:39.435801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:56.907 [2024-12-11 14:46:39.437264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.907 [2024-12-11 14:46:39.437324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.907 [2024-12-11 14:46:39.437394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.907 [2024-12-11 14:46:39.437398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.907 [2024-12-11 14:46:39.590342] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.907 14:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.907 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.166 Malloc1 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.166 [2024-12-11 14:46:39.796492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:57.166 14:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:57.166 { 00:09:57.166 "name": "Malloc1", 00:09:57.166 "aliases": [ 00:09:57.166 "97b7fb5c-c753-47d3-bd33-632506de52d7" 00:09:57.166 ], 00:09:57.166 "product_name": "Malloc disk", 00:09:57.166 "block_size": 512, 00:09:57.166 "num_blocks": 1048576, 00:09:57.166 "uuid": "97b7fb5c-c753-47d3-bd33-632506de52d7", 00:09:57.166 "assigned_rate_limits": { 00:09:57.166 "rw_ios_per_sec": 0, 00:09:57.166 "rw_mbytes_per_sec": 0, 00:09:57.166 "r_mbytes_per_sec": 0, 00:09:57.166 "w_mbytes_per_sec": 0 00:09:57.166 }, 00:09:57.166 "claimed": true, 00:09:57.166 "claim_type": "exclusive_write", 00:09:57.166 "zoned": false, 00:09:57.166 "supported_io_types": { 00:09:57.166 "read": true, 00:09:57.166 "write": true, 00:09:57.166 "unmap": true, 00:09:57.166 "flush": true, 00:09:57.166 "reset": true, 00:09:57.166 "nvme_admin": false, 00:09:57.166 "nvme_io": false, 00:09:57.166 "nvme_io_md": false, 00:09:57.166 "write_zeroes": true, 00:09:57.166 "zcopy": true, 00:09:57.166 "get_zone_info": false, 00:09:57.166 "zone_management": false, 00:09:57.166 "zone_append": false, 00:09:57.166 "compare": false, 00:09:57.166 "compare_and_write": false, 00:09:57.166 "abort": true, 00:09:57.166 "seek_hole": false, 00:09:57.166 "seek_data": false, 00:09:57.166 "copy": true, 00:09:57.166 "nvme_iov_md": false 00:09:57.166 }, 00:09:57.166 "memory_domains": [ 00:09:57.166 { 00:09:57.166 "dma_device_id": "system", 00:09:57.166 "dma_device_type": 1 00:09:57.166 }, 00:09:57.166 { 00:09:57.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.166 "dma_device_type": 2 00:09:57.166 } 00:09:57.166 ], 00:09:57.166 "driver_specific": {} 00:09:57.166 } 00:09:57.166 ]' 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:57.166 14:46:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.100 14:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.100 14:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:58.100 14:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.100 14:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:58.100 14:46:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:59.998 14:46:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:00.256 14:46:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:00.821 14:46:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.755 ************************************ 00:10:01.755 START TEST filesystem_in_capsule_ext4 00:10:01.755 ************************************ 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:01.755 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:01.755 mke2fs 1.47.0 (5-Feb-2023) 00:10:02.013 Discarding device blocks: 0/522240 done 00:10:02.013 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:02.013 Filesystem UUID: 2b594b4b-fb42-4abb-aa94-7cbf2889154b 00:10:02.013 Superblock backups stored on blocks: 00:10:02.013 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:02.013 00:10:02.013 Allocating group tables: 0/64 done 00:10:02.013 Writing inode tables: 
0/64 done 00:10:02.013 Creating journal (8192 blocks): done 00:10:02.013 Writing superblocks and filesystem accounting information: 0/64 done 00:10:02.013 00:10:02.013 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:02.013 14:46:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.272 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 610594 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.530 00:10:07.530 real 0m5.600s 00:10:07.530 user 0m0.012s 00:10:07.530 sys 0m0.064s 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:07.530 ************************************ 00:10:07.530 END TEST filesystem_in_capsule_ext4 00:10:07.530 ************************************ 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.530 
************************************ 00:10:07.530 START TEST filesystem_in_capsule_btrfs 00:10:07.530 ************************************ 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:07.530 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:07.788 btrfs-progs v6.8.1 00:10:07.788 See https://btrfs.readthedocs.io for more information. 00:10:07.788 00:10:07.788 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:07.788 NOTE: several default settings have changed in version 5.15, please make sure 00:10:07.788 this does not affect your deployments: 00:10:07.788 - DUP for metadata (-m dup) 00:10:07.788 - enabled no-holes (-O no-holes) 00:10:07.788 - enabled free-space-tree (-R free-space-tree) 00:10:07.788 00:10:07.788 Label: (null) 00:10:07.788 UUID: a7ae74cb-c485-4f4a-9d81-9e22ff1ff7a9 00:10:07.788 Node size: 16384 00:10:07.788 Sector size: 4096 (CPU page size: 4096) 00:10:07.788 Filesystem size: 510.00MiB 00:10:07.788 Block group profiles: 00:10:07.788 Data: single 8.00MiB 00:10:07.788 Metadata: DUP 32.00MiB 00:10:07.788 System: DUP 8.00MiB 00:10:07.788 SSD detected: yes 00:10:07.788 Zoned device: no 00:10:07.788 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:07.788 Checksum: crc32c 00:10:07.788 Number of devices: 1 00:10:07.788 Devices: 00:10:07.788 ID SIZE PATH 00:10:07.788 1 510.00MiB /dev/nvme0n1p1 00:10:07.788 00:10:07.788 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:07.788 14:46:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 610594 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:08.720 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:08.721 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:08.721 00:10:08.721 real 0m1.314s 00:10:08.721 user 0m0.026s 00:10:08.721 sys 0m0.097s 00:10:08.721 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.721 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:08.721 ************************************ 00:10:08.721 END TEST filesystem_in_capsule_btrfs 00:10:08.721 ************************************ 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.979 ************************************ 00:10:08.979 START TEST filesystem_in_capsule_xfs 00:10:08.979 ************************************ 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:08.979 14:46:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:08.979 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:08.979 = sectsz=512 attr=2, projid32bit=1 00:10:08.979 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:08.979 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:08.979 data = bsize=4096 blocks=130560, imaxpct=25 00:10:08.979 = sunit=0 swidth=0 blks 00:10:08.979 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:08.979 log =internal log bsize=4096 blocks=16384, version=2 00:10:08.979 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:08.979 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:09.545 Discarding blocks...Done. 
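The xtrace above (common/autotest_common.sh@930-941) shows the make_filesystem helper picking a force flag per filesystem type before invoking mkfs. A minimal sketch of that logic, reconstructed from the trace; the ext4 -F spelling and the omitted retry loop are assumptions, and only the branches actually traced are confirmed:

    make_filesystem() {
        local fstype=$1      # ext4 | btrfs | xfs
        local dev_name=$2    # e.g. /dev/nvme0n1p1
        local i=0            # retry counter ('local i=0' in the trace); loop omitted here
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F         # assumption: mkfs.ext4 spells its force flag -F
        else
            force=-f         # traced: btrfs and xfs both get force=-f
        fi
        mkfs."$fstype" $force "$dev_name" && return 0
    }

Each filesystem_in_capsule_* test then mounts the new partition, touches and removes a file, syncs, and unmounts (target/filesystem.sh@23-30), which is what the per-test real/user/sys timings measure.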
00:10:09.545 14:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:09.545 14:46:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 610594 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.441 00:10:11.441 real 0m2.616s 00:10:11.441 user 0m0.015s 00:10:11.441 sys 0m0.056s 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:11.441 ************************************ 00:10:11.441 END TEST filesystem_in_capsule_xfs 00:10:11.441 ************************************ 00:10:11.441 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:12.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 610594 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 610594 ']' 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 610594 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 610594 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 610594' 00:10:12.007 killing process with pid 610594 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 610594 00:10:12.007 14:46:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 610594 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:12.572 00:10:12.572 real 0m15.830s 00:10:12.572 user 1m1.280s 00:10:12.572 sys 0m2.002s 00:10:12.572 14:46:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.572 ************************************ 00:10:12.572 END TEST nvmf_filesystem_in_capsule 00:10:12.572 ************************************ 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:12.572 rmmod nvme_tcp 00:10:12.572 rmmod nvme_fabrics 00:10:12.572 rmmod nvme_keyring 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.572 14:46:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.482 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:14.482 00:10:14.482 real 0m36.846s 00:10:14.482 user 2m4.397s 00:10:14.482 sys 0m5.869s 00:10:14.482 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.482 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.482 
************************************ 00:10:14.482 END TEST nvmf_filesystem 00:10:14.482 ************************************ 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:14.742 ************************************ 00:10:14.742 START TEST nvmf_target_discovery 00:10:14.742 ************************************ 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:14.742 * Looking for test storage... 00:10:14.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.742 --rc genhtml_branch_coverage=1 00:10:14.742 --rc genhtml_function_coverage=1 00:10:14.742 --rc genhtml_legend=1 00:10:14.742 --rc geninfo_all_blocks=1 00:10:14.742 --rc geninfo_unexecuted_blocks=1 00:10:14.742 00:10:14.742 ' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.742 --rc genhtml_branch_coverage=1 00:10:14.742 --rc genhtml_function_coverage=1 00:10:14.742 --rc genhtml_legend=1 00:10:14.742 --rc geninfo_all_blocks=1 00:10:14.742 --rc geninfo_unexecuted_blocks=1 00:10:14.742 00:10:14.742 ' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.742 --rc genhtml_branch_coverage=1 00:10:14.742 --rc genhtml_function_coverage=1 00:10:14.742 --rc genhtml_legend=1 00:10:14.742 --rc geninfo_all_blocks=1 00:10:14.742 --rc geninfo_unexecuted_blocks=1 00:10:14.742 00:10:14.742 ' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:14.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.742 --rc genhtml_branch_coverage=1 00:10:14.742 --rc genhtml_function_coverage=1 00:10:14.742 --rc genhtml_legend=1 00:10:14.742 --rc geninfo_all_blocks=1 00:10:14.742 --rc geninfo_unexecuted_blocks=1 00:10:14.742 00:10:14.742 ' 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.742 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:14.743 14:46:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.276 14:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.276 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:17.277 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:17.277 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:17.277 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:17.277 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.277 14:46:59 
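The nvmf_tcp_init sequence traced here splits the two E810 ports into a point-to-point test link: the target port (cvl_0_0) is moved into a dedicated network namespace while the initiator port (cvl_0_1) stays in the root namespace. Condensed to its effective commands, all of which are visible in the trace at nvmf/common.sh@267-278:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"     # target-side NIC port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec "$NVMF_TARGET_NAMESPACE" \
        ip addr add 10.0.0.2/24 dev cvl_0_0                # target address

The link bring-up, the iptables ACCEPT rule for port 4420, and the two sanity pings follow immediately below.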
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:10:17.277 00:10:17.277 --- 10.0.0.2 ping statistics --- 00:10:17.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.277 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:10:17.277 00:10:17.277 --- 10.0.0.1 ping statistics --- 00:10:17.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.277 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=614590 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 614590 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 614590 ']' 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.277 14:46:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.277 [2024-12-11 14:46:59.848413] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:10:17.277 [2024-12-11 14:46:59.848491] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.277 [2024-12-11 14:46:59.922145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.277 [2024-12-11 14:46:59.980895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.278 [2024-12-11 14:46:59.980975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.278 [2024-12-11 14:46:59.980989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.278 [2024-12-11 14:46:59.981015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.278 [2024-12-11 14:46:59.981025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
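Per the trace (nvmf/common.sh@508-510), nvmfappstart runs the target binary inside that namespace and waits for its RPC socket before any rpc_cmd calls are issued. A minimal sketch; the pid-capture mechanism and the polling inside waitforlisten are not shown in this excerpt and are assumptions:

    # Launch nvmf_tgt in the target namespace with the traced flags.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # assumption: the harness records the pid (614590 in this run)
    # assumption: waitforlisten polls until /var/tmp/spdk.sock answers RPCs
    waitforlisten "$nvmfpid"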
00:10:17.278 [2024-12-11 14:46:59.982743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.278 [2024-12-11 14:46:59.982795] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.278 [2024-12-11 14:46:59.982864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.278 [2024-12-11 14:46:59.982868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 [2024-12-11 14:47:00.142301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 Null1 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 14:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 [2024-12-11 14:47:00.194796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.536 Null2 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.536 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:17.537 Null3 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 Null4 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.537 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.795 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.795 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:17.795 00:10:17.795 Discovery Log Number of Records 6, Generation counter 6 00:10:17.795 =====Discovery Log Entry 0====== 00:10:17.795 trtype: tcp 00:10:17.795 adrfam: ipv4 00:10:17.795 subtype: current discovery subsystem 00:10:17.795 treq: not required 00:10:17.795 portid: 0 00:10:17.795 trsvcid: 4420 00:10:17.795 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:17.795 traddr: 10.0.0.2 00:10:17.795 eflags: explicit discovery connections, duplicate discovery information 00:10:17.795 sectype: none 00:10:17.795 =====Discovery Log Entry 1====== 00:10:17.795 trtype: tcp 00:10:17.795 adrfam: ipv4 00:10:17.795 subtype: nvme subsystem 00:10:17.795 treq: not required 00:10:17.795 portid: 0 00:10:17.795 trsvcid: 4420 00:10:17.795 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:17.795 traddr: 10.0.0.2 00:10:17.795 eflags: none 00:10:17.795 sectype: none 00:10:17.795 =====Discovery Log Entry 2====== 00:10:17.795 trtype: tcp 00:10:17.795 adrfam: ipv4 00:10:17.795 subtype: nvme subsystem 00:10:17.795 treq: not required 00:10:17.795 portid: 0 00:10:17.795 trsvcid: 4420 00:10:17.795 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:17.795 traddr: 10.0.0.2 00:10:17.795 eflags: none 00:10:17.795 sectype: none 00:10:17.795 =====Discovery Log Entry 3====== 00:10:17.795 trtype: tcp 00:10:17.795 adrfam: ipv4 00:10:17.795 subtype: nvme subsystem 00:10:17.795 treq: not required 00:10:17.795 portid: 0 00:10:17.795 trsvcid: 4420 00:10:17.795 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:17.795 traddr: 10.0.0.2 00:10:17.795 eflags: none 00:10:17.795 sectype: none 00:10:17.795 =====Discovery Log Entry 4====== 00:10:17.795 trtype: tcp 00:10:17.795 adrfam: ipv4 00:10:17.795 subtype: nvme subsystem 
00:10:17.795 treq: not required 00:10:17.795 portid: 0 00:10:17.795 trsvcid: 4420 00:10:17.795 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:17.795 traddr: 10.0.0.2 00:10:17.795 eflags: none 00:10:17.795 sectype: none 00:10:17.795 =====Discovery Log Entry 5====== 00:10:17.795 trtype: tcp 00:10:17.795 adrfam: ipv4 00:10:17.795 subtype: discovery subsystem referral 00:10:17.795 treq: not required 00:10:17.795 portid: 0 00:10:17.795 trsvcid: 4430 00:10:17.795 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:17.795 traddr: 10.0.0.2 00:10:17.795 eflags: none 00:10:17.795 sectype: none 00:10:17.795 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:17.795 Perform nvmf subsystem discovery via RPC 00:10:17.795 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:17.795 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.795 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.795 [ 00:10:17.795 { 00:10:17.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:17.796 "subtype": "Discovery", 00:10:17.796 "listen_addresses": [ 00:10:17.796 { 00:10:17.796 "trtype": "TCP", 00:10:17.796 "adrfam": "IPv4", 00:10:17.796 "traddr": "10.0.0.2", 00:10:17.796 "trsvcid": "4420" 00:10:17.796 } 00:10:17.796 ], 00:10:17.796 "allow_any_host": true, 00:10:17.796 "hosts": [] 00:10:17.796 }, 00:10:17.796 { 00:10:17.796 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:17.796 "subtype": "NVMe", 00:10:17.796 "listen_addresses": [ 00:10:17.796 { 00:10:17.796 "trtype": "TCP", 00:10:17.796 "adrfam": "IPv4", 00:10:17.796 "traddr": "10.0.0.2", 00:10:17.796 "trsvcid": "4420" 00:10:17.796 } 00:10:17.796 ], 00:10:17.796 "allow_any_host": true, 00:10:17.796 "hosts": [], 00:10:17.796 "serial_number": "SPDK00000000000001", 00:10:17.796 "model_number": "SPDK bdev Controller", 00:10:17.796 "max_namespaces": 32, 00:10:17.796 "min_cntlid": 1, 00:10:17.796 "max_cntlid": 65519, 00:10:17.796 "namespaces": [ 00:10:17.796 { 00:10:17.796 "nsid": 1, 00:10:17.796 "bdev_name": "Null1", 00:10:17.796 "name": "Null1", 00:10:17.796 "nguid": "9FB5ECF2F6C3483FA484825EFE28F4EC", 00:10:17.796 "uuid": "9fb5ecf2-f6c3-483f-a484-825efe28f4ec" 00:10:17.796 } 00:10:17.796 ] 00:10:17.796 }, 00:10:17.796 { 00:10:17.796 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:17.796 "subtype": "NVMe", 00:10:17.796 "listen_addresses": [ 00:10:17.796 { 00:10:17.796 "trtype": "TCP", 00:10:17.796 "adrfam": "IPv4", 00:10:17.796 "traddr": "10.0.0.2", 00:10:17.796 "trsvcid": "4420" 00:10:17.796 } 00:10:17.796 ], 00:10:17.796 "allow_any_host": true, 00:10:17.796 "hosts": [], 00:10:17.796 "serial_number": "SPDK00000000000002", 00:10:17.796 "model_number": "SPDK bdev Controller", 00:10:17.796 "max_namespaces": 32, 00:10:17.796 "min_cntlid": 1, 00:10:17.796 "max_cntlid": 65519, 00:10:17.796 "namespaces": [ 00:10:17.796 { 00:10:17.796 "nsid": 1, 00:10:17.796 "bdev_name": "Null2", 00:10:17.796 "name": "Null2", 00:10:17.796 "nguid": "534E4F635B754A098FB2E92A15A53652", 00:10:17.796 "uuid": "534e4f63-5b75-4a09-8fb2-e92a15a53652" 00:10:17.796 } 00:10:17.796 ] 00:10:17.796 }, 00:10:17.796 { 00:10:17.796 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:17.796 "subtype": "NVMe", 00:10:17.796 "listen_addresses": [ 00:10:17.796 { 00:10:17.796 "trtype": "TCP", 00:10:17.796 "adrfam": "IPv4", 00:10:17.796 "traddr": "10.0.0.2", 
00:10:17.796 "trsvcid": "4420" 00:10:17.796 } 00:10:17.796 ], 00:10:17.796 "allow_any_host": true, 00:10:17.796 "hosts": [], 00:10:17.796 "serial_number": "SPDK00000000000003", 00:10:17.796 "model_number": "SPDK bdev Controller", 00:10:17.796 "max_namespaces": 32, 00:10:17.796 "min_cntlid": 1, 00:10:17.796 "max_cntlid": 65519, 00:10:17.796 "namespaces": [ 00:10:17.796 { 00:10:17.796 "nsid": 1, 00:10:17.796 "bdev_name": "Null3", 00:10:17.796 "name": "Null3", 00:10:17.796 "nguid": "67BBF95425C94CCF8030F98971C6F37F", 00:10:17.796 "uuid": "67bbf954-25c9-4ccf-8030-f98971c6f37f" 00:10:17.796 } 00:10:17.796 ] 00:10:17.796 }, 00:10:17.796 { 00:10:17.796 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:17.796 "subtype": "NVMe", 00:10:17.796 "listen_addresses": [ 00:10:17.796 { 00:10:17.796 "trtype": "TCP", 00:10:17.796 "adrfam": "IPv4", 00:10:17.796 "traddr": "10.0.0.2", 00:10:17.796 "trsvcid": "4420" 00:10:17.796 } 00:10:17.796 ], 00:10:17.796 "allow_any_host": true, 00:10:17.796 "hosts": [], 00:10:17.796 "serial_number": "SPDK00000000000004", 00:10:17.796 "model_number": "SPDK bdev Controller", 00:10:17.796 "max_namespaces": 32, 00:10:17.796 "min_cntlid": 1, 00:10:17.796 "max_cntlid": 65519, 00:10:17.796 "namespaces": [ 00:10:17.796 { 00:10:17.796 "nsid": 1, 00:10:17.796 "bdev_name": "Null4", 00:10:17.796 "name": "Null4", 00:10:17.796 "nguid": "A3D54414926B4C69ADD27F5350E193F9", 00:10:17.796 "uuid": "a3d54414-926b-4c69-add2-7f5350e193f9" 00:10:17.796 } 00:10:17.796 ] 00:10:17.796 } 00:10:17.796 ] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.796 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:18.054 14:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:18.054 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.055 rmmod nvme_tcp 00:10:18.055 rmmod nvme_fabrics 00:10:18.055 rmmod nvme_keyring 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 614590 ']' 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 614590 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 614590 ']' 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 614590 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 614590 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 614590' 00:10:18.055 killing process with pid 614590 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 614590 00:10:18.055 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 614590 00:10:18.313 14:47:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.313 14:47:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.849 14:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:20.849 00:10:20.849 real 0m5.715s 00:10:20.849 user 0m4.833s 00:10:20.849 sys 0m1.988s 00:10:20.849 14:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.849 14:47:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:20.849 ************************************ 00:10:20.849 END TEST nvmf_target_discovery 00:10:20.849 ************************************ 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:20.849 ************************************ 00:10:20.849 START TEST nvmf_referrals 00:10:20.849 ************************************ 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:20.849 * Looking for test storage... 
00:10:20.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:20.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.849 --rc genhtml_branch_coverage=1 00:10:20.849 --rc genhtml_function_coverage=1 00:10:20.849 --rc genhtml_legend=1 00:10:20.849 --rc geninfo_all_blocks=1 00:10:20.849 --rc geninfo_unexecuted_blocks=1 00:10:20.849 00:10:20.849 ' 00:10:20.849 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:20.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.849 --rc genhtml_branch_coverage=1 00:10:20.849 --rc genhtml_function_coverage=1 00:10:20.849 --rc genhtml_legend=1 00:10:20.849 --rc geninfo_all_blocks=1 00:10:20.850 --rc geninfo_unexecuted_blocks=1 00:10:20.850 00:10:20.850 ' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:20.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.850 --rc genhtml_branch_coverage=1 00:10:20.850 --rc genhtml_function_coverage=1 00:10:20.850 --rc genhtml_legend=1 00:10:20.850 --rc geninfo_all_blocks=1 00:10:20.850 --rc geninfo_unexecuted_blocks=1 00:10:20.850 00:10:20.850 ' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:20.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.850 --rc genhtml_branch_coverage=1 00:10:20.850 --rc genhtml_function_coverage=1 00:10:20.850 --rc genhtml_legend=1 00:10:20.850 --rc geninfo_all_blocks=1 00:10:20.850 --rc geninfo_unexecuted_blocks=1 00:10:20.850 00:10:20.850 ' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:20.850 14:47:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:22.754 14:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:22.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:22.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:22.754 
14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:22.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:22.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:22.754 14:47:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.754 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.755 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:22.755 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:22.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:10:22.755 00:10:22.755 --- 10.0.0.2 ping statistics --- 00:10:22.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.755 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:22.755 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:23.012 00:10:23.012 --- 10.0.0.1 ping statistics --- 00:10:23.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.012 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:23.012 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.012 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=616712 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 616712 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 616712 ']' 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
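At this point the harness has launched nvmf_tgt (pid 616712) inside the cvl_0_0_ns_spdk network namespace and is polling for its RPC socket. Once the target is up, the referral configuration traced below can also be reproduced by hand against any running nvmf_tgt; a minimal sketch, assuming the stock scripts/rpc.py client from the SPDK tree (the rpc_cmd helper seen in the trace issues the same JSON-RPC calls):

    # create the TCP transport and a discovery listener on port 8009
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    # register three referrals pointing at other discovery services on port 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    # the test then asserts that exactly three referrals are reported
    scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expected: 3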
00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.013 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.013 [2024-12-11 14:47:05.616780] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:10:23.013 [2024-12-11 14:47:05.616879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.013 [2024-12-11 14:47:05.690002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.013 [2024-12-11 14:47:05.748636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.013 [2024-12-11 14:47:05.748689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.013 [2024-12-11 14:47:05.748718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.013 [2024-12-11 14:47:05.748730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.013 [2024-12-11 14:47:05.748740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.013 [2024-12-11 14:47:05.750276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.013 [2024-12-11 14:47:05.750343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.013 [2024-12-11 14:47:05.750408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.013 [2024-12-11 14:47:05.750411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.271 [2024-12-11 14:47:05.901978] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:10:23.271 [2024-12-11 14:47:05.930761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.271 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:23.272 14:47:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.272 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.529 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:23.529 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.530 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:23.787 14:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.787 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.045 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.046 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:24.303 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:24.303 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:24.303 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:24.303 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:24.303 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.303 14:47:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.561 14:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:24.561 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:24.818 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:25.076 14:47:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:25.334 rmmod nvme_tcp 00:10:25.334 rmmod nvme_fabrics 00:10:25.334 rmmod nvme_keyring 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 616712 ']' 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 616712 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 616712 ']' 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 616712 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.334 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 616712 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 616712' 00:10:25.593 killing process with pid 616712 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 616712 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 616712 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.593 14:47:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:28.190 00:10:28.190 real 0m7.327s 00:10:28.190 user 0m11.744s 00:10:28.190 sys 0m2.386s 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:28.190 ************************************ 00:10:28.190 END TEST nvmf_referrals 00:10:28.190 ************************************ 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:28.190 ************************************ 00:10:28.190 START TEST nvmf_connect_disconnect 00:10:28.190 ************************************ 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:28.190 * Looking for test storage... 00:10:28.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:28.190 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:28.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.191 --rc genhtml_branch_coverage=1 00:10:28.191 --rc genhtml_function_coverage=1 00:10:28.191 --rc genhtml_legend=1 00:10:28.191 --rc geninfo_all_blocks=1 00:10:28.191 --rc geninfo_unexecuted_blocks=1 00:10:28.191 00:10:28.191 ' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:28.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.191 --rc genhtml_branch_coverage=1 00:10:28.191 --rc genhtml_function_coverage=1 00:10:28.191 --rc genhtml_legend=1 00:10:28.191 --rc geninfo_all_blocks=1 00:10:28.191 --rc geninfo_unexecuted_blocks=1 00:10:28.191 00:10:28.191 ' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:28.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.191 --rc genhtml_branch_coverage=1 00:10:28.191 --rc genhtml_function_coverage=1 00:10:28.191 --rc genhtml_legend=1 00:10:28.191 --rc geninfo_all_blocks=1 00:10:28.191 --rc geninfo_unexecuted_blocks=1 00:10:28.191 00:10:28.191 ' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:28.191 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.191 --rc genhtml_branch_coverage=1 00:10:28.191 --rc genhtml_function_coverage=1 00:10:28.191 --rc genhtml_legend=1 00:10:28.191 --rc geninfo_all_blocks=1 00:10:28.191 --rc geninfo_unexecuted_blocks=1 00:10:28.191 00:10:28.191 ' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.191 14:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:28.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:28.191 14:47:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.103 
14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:30.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.103 
14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:30.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:30.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:30.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.103 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:10:30.362 00:10:30.362 --- 10.0.0.2 ping statistics --- 00:10:30.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.362 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:10:30.362 00:10:30.362 --- 10.0.0.1 ping statistics --- 00:10:30.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.362 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=619026 00:10:30.362 14:47:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 619026 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 619026 ']' 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.362 14:47:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.362 [2024-12-11 14:47:13.027265] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:10:30.362 [2024-12-11 14:47:13.027344] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.362 [2024-12-11 14:47:13.097096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:30.621 [2024-12-11 14:47:13.153280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.621 [2024-12-11 14:47:13.153330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.621 [2024-12-11 14:47:13.153343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.621 [2024-12-11 14:47:13.153360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.621 [2024-12-11 14:47:13.153369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
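The banner above also documents the tracing hook armed by -e 0xFFFF: every tracepoint group is enabled, and events can be inspected live or preserved from the shared-memory ring the app leaves behind. Following the banner's own hints (the shm id 0 matches the -i 0 the target was started with):

  # live snapshot of runtime events
  ./build/bin/spdk_trace -s nvmf -i 0
  # or keep the ring for offline analysis/debug, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0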
00:10:30.621 [2024-12-11 14:47:13.155076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.621 [2024-12-11 14:47:13.155151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.621 [2024-12-11 14:47:13.155260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:30.621 [2024-12-11 14:47:13.155263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.621 [2024-12-11 14:47:13.340428] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.621 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 14:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:30.878 [2024-12-11 14:47:13.402113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:30.878 14:47:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:33.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.733 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.015 rmmod nvme_tcp 00:10:45.015 rmmod nvme_fabrics 00:10:45.015 rmmod nvme_keyring 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 619026 ']' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 619026 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 619026 ']' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 619026 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
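The five disconnect messages above are the only visible output of the test loop itself; connect_disconnect.sh switches xtrace off (the set +x at connect_disconnect.sh line 34 in the trace) before iterating. A sketch of an equivalent cycle against the subsystem provisioned above, using rpc.py (which the rpc_cmd helper wraps) and the kernel nvme-cli initiator; this approximates the suppressed loop rather than quoting it:

    rpc="$SPDK_DIR/scripts/rpc.py"
    # Provisioning, as traced above via rpc_cmd:
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                 # creates Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # One iteration per "disconnected 1 controller(s)" line: attach the kernel
    # NVMe/TCP initiator to the subsystem, then detach it again. The suppressed
    # trace may include extra checks between these two steps.
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the NQN:... line seen above
    done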
00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619026 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619026' 00:10:45.015 killing process with pid 619026 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 619026 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 619026 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.015 14:47:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:46.921 00:10:46.921 real 0m19.164s 00:10:46.921 user 0m57.339s 00:10:46.921 sys 0m3.421s 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:46.921 ************************************ 00:10:46.921 END TEST nvmf_connect_disconnect 00:10:46.921 ************************************ 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:10:46.921 ************************************ 00:10:46.921 START TEST nvmf_multitarget 00:10:46.921 ************************************ 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:46.921 * Looking for test storage... 00:10:46.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:46.921 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.181 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.182 14:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:47.182 14:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.182 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.183 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.183 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.183 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.183 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.183 14:47:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.719 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.719 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.719 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.719 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.719 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.719 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:49.720 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:49.720 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:49.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:49.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.720 14:47:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:10:49.720 00:10:49.720 --- 10.0.0.2 ping statistics --- 00:10:49.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.720 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:10:49.720 00:10:49.720 --- 10.0.0.1 ping statistics --- 00:10:49.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.720 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.720 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=622793 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 622793 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 622793 ']' 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.721 [2024-12-11 14:47:32.186390] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:10:49.721 [2024-12-11 14:47:32.186477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.721 [2024-12-11 14:47:32.260220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.721 [2024-12-11 14:47:32.318540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.721 [2024-12-11 14:47:32.318599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.721 [2024-12-11 14:47:32.318628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.721 [2024-12-11 14:47:32.318640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.721 [2024-12-11 14:47:32.318649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.721 [2024-12-11 14:47:32.320241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.721 [2024-12-11 14:47:32.320338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.721 [2024-12-11 14:47:32.320302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.721 [2024-12-11 14:47:32.320342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:49.721 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:49.979 "nvmf_tgt_1" 00:10:49.979 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:50.237 "nvmf_tgt_2" 00:10:50.237 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:10:50.237 14:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:50.237 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:50.237 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:50.494 true 00:10:50.494 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:50.494 true 00:10:50.494 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:50.494 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.753 rmmod nvme_tcp 00:10:50.753 rmmod nvme_fabrics 00:10:50.753 rmmod nvme_keyring 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 622793 ']' 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 622793 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 622793 ']' 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 622793 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622793 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.753 14:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622793' 00:10:50.753 killing process with pid 622793 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 622793 00:10:50.753 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 622793 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.013 14:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:53.552 00:10:53.552 real 0m6.070s 00:10:53.552 user 0m7.075s 00:10:53.552 sys 0m2.109s 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:53.552 ************************************ 00:10:53.552 END TEST nvmf_multitarget 00:10:53.552 ************************************ 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.552 ************************************ 00:10:53.552 START TEST nvmf_rpc 00:10:53.552 ************************************ 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:53.552 * Looking for test storage... 
00:10:53.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:53.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.552 --rc genhtml_branch_coverage=1 00:10:53.552 --rc genhtml_function_coverage=1 00:10:53.552 --rc genhtml_legend=1 00:10:53.552 --rc geninfo_all_blocks=1 00:10:53.552 --rc geninfo_unexecuted_blocks=1 00:10:53.552 00:10:53.552 ' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:53.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.552 --rc genhtml_branch_coverage=1 00:10:53.552 --rc genhtml_function_coverage=1 00:10:53.552 --rc genhtml_legend=1 00:10:53.552 --rc geninfo_all_blocks=1 00:10:53.552 --rc geninfo_unexecuted_blocks=1 00:10:53.552 00:10:53.552 ' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:53.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.552 --rc genhtml_branch_coverage=1 00:10:53.552 --rc genhtml_function_coverage=1 00:10:53.552 --rc genhtml_legend=1 00:10:53.552 --rc geninfo_all_blocks=1 00:10:53.552 --rc geninfo_unexecuted_blocks=1 00:10:53.552 00:10:53.552 ' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:53.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.552 --rc genhtml_branch_coverage=1 00:10:53.552 --rc genhtml_function_coverage=1 00:10:53.552 --rc genhtml_legend=1 00:10:53.552 --rc geninfo_all_blocks=1 00:10:53.552 --rc geninfo_unexecuted_blocks=1 00:10:53.552 00:10:53.552 ' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
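The lt 1.15 2 check traced just above (and earlier, ahead of the multitarget test) is scripts/common.sh comparing the installed lcov version against 2: both strings are split into fields on '.', '-' and ':' (the IFS=.-: reads in the trace) and the fields are compared numerically left to right. A standalone sketch of that comparison, padding the shorter field list with zeros; this is an approximation, not the verbatim helper:

    # True (exit 0) when version $1 sorts before version $2, field by field,
    # as in the "lt 1.15 2" trace above.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local i a b
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "old lcov: use the --rc lcov_branch_coverage=1 style options"

On success the autotest sets the lcov_rc_opt/LCOV_OPTS compatibility flags shown in the trace above.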
00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.552 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.553 14:47:35 
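
Each `source` of paths/export.sh above prepends the same go/protoc/golangci directories again, which is why the PATH echoed by export.sh@6 carries many duplicate entries; harmless, but noisy. An order-preserving dedupe helper (not part of the SPDK tree, shown only to illustrate the fix):

    dedupe_path() {
      local out='' dir
      local IFS=:
      for dir in $PATH; do
        # keep only the first occurrence of each directory
        case ":$out:" in
          *":$dir:"*) ;;
          *) out=${out:+$out:}$dir ;;
        esac
      done
      printf '%s\n' "$out"
    }

    PATH=$(dedupe_path)
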
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.553 14:47:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:55.456 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:55.456 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:55.456 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:55.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:55.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.457 14:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:10:55.457 00:10:55.457 --- 10.0.0.2 ping statistics --- 00:10:55.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.457 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:55.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:10:55.457 00:10:55.457 --- 10.0.0.1 ping statistics --- 00:10:55.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.457 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.457 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=624899 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 624899 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 624899 ']' 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.715 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.715 [2024-12-11 14:47:38.285928] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
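
At this point nvmf_tcp_init has built the whole test topology on the two E810 ports: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and connectivity is ping-verified in both directions. Replayed as plain commands (interface names and addresses exactly as in the log; needs root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
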
00:10:55.715 [2024-12-11 14:47:38.286016] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.715 [2024-12-11 14:47:38.357415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.715 [2024-12-11 14:47:38.411833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.715 [2024-12-11 14:47:38.411893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.715 [2024-12-11 14:47:38.411921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.715 [2024-12-11 14:47:38.411932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.716 [2024-12-11 14:47:38.411942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.716 [2024-12-11 14:47:38.413506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.716 [2024-12-11 14:47:38.413630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.716 [2024-12-11 14:47:38.413658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.716 [2024-12-11 14:47:38.413661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:55.974 "tick_rate": 2700000000, 00:10:55.974 "poll_groups": [ 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_000", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 "current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.974 "completed_nvme_io": 0, 00:10:55.974 "transports": [] 00:10:55.974 }, 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_001", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 "current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.974 "completed_nvme_io": 0, 00:10:55.974 "transports": [] 00:10:55.974 }, 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_002", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 
"current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.974 "completed_nvme_io": 0, 00:10:55.974 "transports": [] 00:10:55.974 }, 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_003", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 "current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.974 "completed_nvme_io": 0, 00:10:55.974 "transports": [] 00:10:55.974 } 00:10:55.974 ] 00:10:55.974 }' 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.974 [2024-12-11 14:47:38.663315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.974 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:55.974 "tick_rate": 2700000000, 00:10:55.974 "poll_groups": [ 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_000", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 "current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.974 "completed_nvme_io": 0, 00:10:55.974 "transports": [ 00:10:55.974 { 00:10:55.974 "trtype": "TCP" 00:10:55.974 } 00:10:55.974 ] 00:10:55.974 }, 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_001", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 "current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.974 "completed_nvme_io": 0, 00:10:55.974 "transports": [ 00:10:55.974 { 00:10:55.974 "trtype": "TCP" 00:10:55.974 } 00:10:55.974 ] 00:10:55.974 }, 00:10:55.974 { 00:10:55.974 "name": "nvmf_tgt_poll_group_002", 00:10:55.974 "admin_qpairs": 0, 00:10:55.974 "io_qpairs": 0, 00:10:55.974 "current_admin_qpairs": 0, 00:10:55.974 "current_io_qpairs": 0, 00:10:55.974 "pending_bdev_io": 0, 00:10:55.975 "completed_nvme_io": 0, 00:10:55.975 "transports": [ 00:10:55.975 { 00:10:55.975 "trtype": "TCP" 
00:10:55.975 } 00:10:55.975 ] 00:10:55.975 }, 00:10:55.975 { 00:10:55.975 "name": "nvmf_tgt_poll_group_003", 00:10:55.975 "admin_qpairs": 0, 00:10:55.975 "io_qpairs": 0, 00:10:55.975 "current_admin_qpairs": 0, 00:10:55.975 "current_io_qpairs": 0, 00:10:55.975 "pending_bdev_io": 0, 00:10:55.975 "completed_nvme_io": 0, 00:10:55.975 "transports": [ 00:10:55.975 { 00:10:55.975 "trtype": "TCP" 00:10:55.975 } 00:10:55.975 ] 00:10:55.975 } 00:10:55.975 ] 00:10:55.975 }' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:55.975 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.233 Malloc1 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
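
The target is now up inside the namespace (nvmf_tgt pid 624899, four poll groups for core mask 0xF), and the test has created the TCP transport and re-read nvmf_get_stats to confirm every poll group gained a TCP transport entry with all qpair counters still at zero. The jcount/jsum helpers are thin jq pipelines; the same verification written out directly (rpc.py path assumed relative to the SPDK tree, expected values taken from the trace):

    rpc=./scripts/rpc.py   # driven via the target's /var/tmp/spdk.sock in the real run

    $rpc nvmf_create_transport -t tcp -o -u 8192   # flags exactly as passed in the trace

    stats=$($rpc nvmf_get_stats)
    jq '.poll_groups[].name' <<< "$stats" | wc -l                      # 4 poll groups (-m 0xF)
    jq '.poll_groups[0].transports[0].trtype' <<< "$stats"             # "TCP" once created
    jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'  # expect 0
    jq '.poll_groups[].io_qpairs'    <<< "$stats" | awk '{s+=$1} END {print s}'  # expect 0
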
common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.233 [2024-12-11 14:47:38.826480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:56.233 [2024-12-11 14:47:38.850705] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:56.233 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:56.233 could not add new controller: failed to write to nvme-fabrics device 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:56.233 14:47:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.233 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.234 14:47:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.800 14:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.800 14:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.800 14:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.800 14:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.800 14:47:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:59.327 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.328 [2024-12-11 14:47:41.652923] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:59.328 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:59.328 could not add new controller: failed to write to nvme-fabrics device 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.328 
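
waitforserial and waitforserial_disconnect, whose expansions dominate the surrounding screens, are bounded polls of lsblk for the subsystem serial (SPDKISFASTANDAWESOME): a connect is declared good once `lsblk -l -o NAME,SERIAL` shows the expected device count, and a disconnect once the serial is gone. Condensed to the underlying idiom (a restatement, not the verbatim autotest_common.sh helpers; the disconnect bound is illustrative):

    waitforserial() {
      local serial=$1 want=${2:-1} i=0
      sleep 2                                   # as in the trace: settle before polling
      while (( i++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
        sleep 2
      done
      return 1
    }

    waitforserial_disconnect() {
      local serial=$1 i=0
      while (( i++ <= 20 )); do
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
        sleep 1
      done
      return 1
    }
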
14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.328 14:47:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.586 14:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.586 14:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:59.586 14:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.586 14:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:59.586 14:47:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:02.118 
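
That completes a full access-control round trip: the subsystem is created, allow_any_host is disabled so the host list is enforced, a connect with the generated host NQN fails with the "does not allow host" error seen above, nvmf_subsystem_add_host whitelists the host so the same connect succeeds, and remove_host plus allow_any_host -e then repeat the deny/allow check from the other side. In outline (NQNs, address, and serial from the log):

    rpc=./scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_subsystem "$subnqn" -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns "$subnqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host -d "$subnqn"            # enforce the host list
    $rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420

    # host list is empty: this connect must fail ("does not allow host")
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 && exit 1

    $rpc nvmf_subsystem_add_host "$subnqn" "$NVME_HOSTNQN"     # whitelist: now it works
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
         -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420

    $rpc nvmf_subsystem_remove_host "$subnqn" "$NVME_HOSTNQN"  # deny again...
    $rpc nvmf_subsystem_allow_any_host -e "$subnqn"            # ...then open to any host
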
14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.118 [2024-12-11 14:47:44.453499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.118 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.119 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:02.119 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.119 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.119 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.119 14:47:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.686 14:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.686 14:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:02.686 14:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.686 14:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:02.686 14:47:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.584 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.585 [2024-12-11 14:47:47.307280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
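
The iterations running here are target/rpc.sh's main loop: five passes (loops=5 from the script) that each build a subsystem with namespace ID 5 backed by Malloc1, connect, wait for the serial, disconnect, and tear the namespace and subsystem back down, exercising repeated create/delete of the whole object graph. One pass, flattened (waitforserial as sketched earlier):

    rpc=./scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1

    for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$subnqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$subnqn" Malloc1 -n 5        # fixed nsid 5
      $rpc nvmf_subsystem_allow_any_host "$subnqn"
      nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
           -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n "$subnqn"
      $rpc nvmf_subsystem_remove_ns "$subnqn" 5
      $rpc nvmf_delete_subsystem "$subnqn"
    done
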
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.585 14:47:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.518 14:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.518 14:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:05.518 14:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.518 14:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:05.518 14:47:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 [2024-12-11 14:47:50.130674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.415 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.416 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.416 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.076 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.076 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:08.076 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.076 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:08.076 14:47:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:10.003 
14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:10.003 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:10.003 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.003 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:10.003 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.003 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:10.003 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.262 [2024-12-11 14:47:52.901553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.262 14:47:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.196 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.196 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.196 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.196 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:11.196 14:47:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.093 [2024-12-11 14:47:55.775181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.093 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.094 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.094 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.094 14:47:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:13.659 14:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.659 14:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.659 14:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.659 14:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.659 14:47:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.186 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:16.187 
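The iterations above all execute the same body from target/rpc.sh (the @81-@94 trace lines). Reconstructed from the xtrace output rather than quoted from the script, one pass looks roughly like this:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # namespace id 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME             # poll until the namespace shows up
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME  # poll until it is gone again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Per the @1202-@1212 lines, waitforserial retries up to 16 times with a 2-second sleep, counting lsblk -l -o NAME,SERIAL rows that contain the serial until the expected device count appears.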
14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 [2024-12-11 14:47:58.562037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 [2024-12-11 14:47:58.610048] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 
14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 [2024-12-11 14:47:58.658228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 [2024-12-11 14:47:58.706382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.187 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 [2024-12-11 14:47:58.754571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:16.188 "tick_rate": 2700000000, 00:11:16.188 "poll_groups": [ 00:11:16.188 { 00:11:16.188 "name": "nvmf_tgt_poll_group_000", 00:11:16.188 "admin_qpairs": 2, 00:11:16.188 "io_qpairs": 84, 00:11:16.188 "current_admin_qpairs": 0, 00:11:16.188 "current_io_qpairs": 0, 00:11:16.188 "pending_bdev_io": 0, 00:11:16.188 "completed_nvme_io": 165, 00:11:16.188 "transports": [ 00:11:16.188 { 00:11:16.188 "trtype": "TCP" 00:11:16.188 } 00:11:16.188 ] 00:11:16.188 }, 00:11:16.188 { 00:11:16.188 "name": "nvmf_tgt_poll_group_001", 00:11:16.188 "admin_qpairs": 2, 00:11:16.188 "io_qpairs": 84, 00:11:16.188 "current_admin_qpairs": 0, 00:11:16.188 "current_io_qpairs": 0, 00:11:16.188 "pending_bdev_io": 0, 00:11:16.188 "completed_nvme_io": 145, 00:11:16.188 "transports": [ 00:11:16.188 { 00:11:16.188 "trtype": "TCP" 00:11:16.188 } 00:11:16.188 ] 00:11:16.188 }, 00:11:16.188 { 00:11:16.188 "name": "nvmf_tgt_poll_group_002", 00:11:16.188 "admin_qpairs": 1, 00:11:16.188 "io_qpairs": 84, 00:11:16.188 "current_admin_qpairs": 0, 00:11:16.188 "current_io_qpairs": 0, 00:11:16.188 "pending_bdev_io": 0, 00:11:16.188 "completed_nvme_io": 135, 00:11:16.188 "transports": [ 00:11:16.188 { 00:11:16.188 "trtype": "TCP" 00:11:16.188 } 00:11:16.188 ] 00:11:16.188 }, 00:11:16.188 { 00:11:16.188 "name": "nvmf_tgt_poll_group_003", 00:11:16.188 "admin_qpairs": 2, 00:11:16.188 "io_qpairs": 84, 00:11:16.188 "current_admin_qpairs": 0, 00:11:16.188 "current_io_qpairs": 0, 00:11:16.188 "pending_bdev_io": 0, 00:11:16.188 "completed_nvme_io": 241, 00:11:16.188 "transports": [ 00:11:16.188 { 00:11:16.188 "trtype": "TCP" 00:11:16.188 } 00:11:16.188 ] 00:11:16.188 } 00:11:16.188 ] 00:11:16.188 }' 00:11:16.188 14:47:58 
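The second loop (the @99-@107 trace lines) never connects a host; it cycles the subsystem lifecycle purely over RPC. A sketch of one pass, again reconstructed from the trace — the nvmf_subsystem_remove_ns ... 1 call suggests the first free nsid (1) is assigned when -n is omitted:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # no -n: nsid auto-assigned
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done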
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.188 rmmod nvme_tcp 00:11:16.188 rmmod nvme_fabrics 00:11:16.188 rmmod nvme_keyring 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 624899 ']' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 624899 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 624899 ']' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 624899 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.188 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 624899 00:11:16.447 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:16.447 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:16.447 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 624899' 
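The nvmf_get_stats capture above is reduced by the jsum helper (the @19-@20 trace lines): a jq filter pulls one number per poll group and awk sums them. A minimal sketch, assuming the helper reads the captured $stats JSON:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

With the stats above this yields 2+2+1+2 = 7 admin qpairs and 4 x 84 = 336 I/O qpairs across the four poll groups, matching the (( 7 > 0 )) and (( 336 > 0 )) assertions in the trace. Everything after that is teardown: nvmftestfini unloads the kernel initiator modules (the rmmod lines above), kills target pid 624899, and, as the trace continues below, restores iptables minus the SPDK_NVMF-tagged rules before flushing the test interfaces.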
00:11:16.447 killing process with pid 624899 00:11:16.447 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 624899 00:11:16.447 14:47:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 624899 00:11:16.447 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.447 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.447 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.447 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:16.706 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:16.706 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.707 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.707 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.707 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.707 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.707 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.707 14:47:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.614 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.614 00:11:18.614 real 0m25.510s 00:11:18.614 user 1m22.671s 00:11:18.614 sys 0m4.148s 00:11:18.614 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.614 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.614 ************************************ 00:11:18.614 END TEST nvmf_rpc 00:11:18.614 ************************************ 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.615 ************************************ 00:11:18.615 START TEST nvmf_invalid 00:11:18.615 ************************************ 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:18.615 * Looking for test storage... 
00:11:18.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.615 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.874 --rc genhtml_branch_coverage=1 00:11:18.874 --rc genhtml_function_coverage=1 00:11:18.874 --rc genhtml_legend=1 00:11:18.874 --rc geninfo_all_blocks=1 00:11:18.874 --rc geninfo_unexecuted_blocks=1 00:11:18.874 00:11:18.874 ' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.874 --rc genhtml_branch_coverage=1 00:11:18.874 --rc genhtml_function_coverage=1 00:11:18.874 --rc genhtml_legend=1 00:11:18.874 --rc geninfo_all_blocks=1 00:11:18.874 --rc geninfo_unexecuted_blocks=1 00:11:18.874 00:11:18.874 ' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.874 --rc genhtml_branch_coverage=1 00:11:18.874 --rc genhtml_function_coverage=1 00:11:18.874 --rc genhtml_legend=1 00:11:18.874 --rc geninfo_all_blocks=1 00:11:18.874 --rc geninfo_unexecuted_blocks=1 00:11:18.874 00:11:18.874 ' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.874 --rc genhtml_branch_coverage=1 00:11:18.874 --rc genhtml_function_coverage=1 00:11:18.874 --rc genhtml_legend=1 00:11:18.874 --rc geninfo_all_blocks=1 00:11:18.874 --rc geninfo_unexecuted_blocks=1 00:11:18.874 00:11:18.874 ' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:18.874 14:48:01 
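The lcov gate traced above is plain shell from scripts/common.sh: lt 1.15 2 delegates to cmp_versions, which splits both versions on ., -, and : and compares them field by field. Condensed here to the '<' path the trace actually exercises (the real helper also handles the other operators and validates each field):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        read -ra ver2 <<< "$3"    # "2"    -> (2)
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }

Here the first field already decides it (1 < 2), so the lcov-1.x flavor of the coverage flags (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is the one exported above.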
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
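One harness wart is visible while nvmf/common.sh is sourced above: line 33 evaluates '[' '' -eq 1 ']' and bash complains "integer expression expected" because the tested variable is empty; the guard simply fails and the run carries on. The same sourcing also derives the host identity reused by every nvme connect in this log: NVME_HOSTNQN comes from nvme gen-hostnqn, its UUID part becomes NVME_HOSTID, and the two feed the NVME_HOST argument pair. A defensive spelling of the line-33 test would default the flag before the numeric comparison — a hypothetical fix with an illustrative variable name, not the repository's actual code:

    # hypothetical: default the unset flag so [ sees an integer on both sides
    if [ "${SOME_SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        :    # whatever the real line 33 guards is elided here
    fi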
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.874 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.875 14:48:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:20.779 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.779 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:20.779 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:11:21.042 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:11:21.042 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:11:21.042 Found net devices under 0000:0a:00.0: cvl_0_0
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:21.042 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:11:21.042 Found net devices under 0000:0a:00.1: cvl_0_1
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init
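The block just above is nvmf/common.sh classifying the host's NICs: gather_supported_nvmf_pci_devs buckets PCI functions by vendor:device ID (0x8086:0x159b is an Intel E810 port bound to the ice driver; the x722 and mlx buckets stay empty on this rig), then reads each matching function's netdev name out of sysfs, which is where cvl_0_0 and cvl_0_1 come from. A standalone sketch of the same sysfs walk (a hypothetical helper written for illustration, not the SPDK function itself):

    #!/usr/bin/env bash
    # Sketch: list net devices for every Intel E810 (8086:159b) PCI function,
    # mirroring the gather_supported_nvmf_pci_devs trace above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            for net in "$pci"/net/*; do
                [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
            done
        fi
    done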
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:21.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:21.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms
00:11:21.043
00:11:21.043 --- 10.0.0.2 ping statistics ---
00:11:21.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.043 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:21.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:21.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms
00:11:21.043
00:11:21.043 --- 10.0.0.1 ping statistics ---
00:11:21.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:21.043 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=629511
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 629511
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 629511 ']'
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:21.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:21.043 14:48:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:21.043 [2024-12-11 14:48:03.781697] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
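With nvmf_tcp_init done, the topology is in place: physical port cvl_0_0 has been moved into the network namespace cvl_0_0_ns_spdk and given 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so NVMe/TCP traffic on port 4420 crosses a real link between the two E810 ports. nvmf_tgt is then launched inside the namespace via ip netns exec, and the two pings confirm reachability in both directions. A minimal sketch of the same shape, with a veth pair standing in for the two physical ports (the veth and namespace names here are hypothetical, not from this log):

    ip netns add tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini
    ip link set veth_ini up
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2   # initiator -> target, as in the trace above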
00:11:21.043 [2024-12-11 14:48:03.781794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:21.301 [2024-12-11 14:48:03.861300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:21.301 [2024-12-11 14:48:03.921582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:21.301 [2024-12-11 14:48:03.921645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:21.301 [2024-12-11 14:48:03.921660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:21.301 [2024-12-11 14:48:03.921672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:21.301 [2024-12-11 14:48:03.921682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:21.301 [2024-12-11 14:48:03.923314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:11:21.301 [2024-12-11 14:48:03.923371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:11:21.301 [2024-12-11 14:48:03.923439] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:11:21.301 [2024-12-11 14:48:03.923442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:21.301 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:21.302 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:11:21.302 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:21.302 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:21.302 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:21.559 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:21.559 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
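The -m 0xF mask handed to nvmf_tgt selects CPU cores 0-3, one bit per core, which is why spdk_app_start reports four available cores and exactly four reactors come up. A quick sketch of expanding such a mask (just the arithmetic, not an SPDK tool):

    mask=0xF   # SPDK core mask: bit N set => run a reactor on core N
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done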
00:11:21.559 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29322
00:11:21.816 [2024-12-11 14:48:04.355431] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:11:21.816 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:11:21.816 {
00:11:21.816 "nqn": "nqn.2016-06.io.spdk:cnode29322",
00:11:21.816 "tgt_name": "foobar",
00:11:21.816 "method": "nvmf_create_subsystem",
00:11:21.816 "req_id": 1
00:11:21.816 }
00:11:21.816 Got JSON-RPC error response
00:11:21.816 response:
00:11:21.816 {
00:11:21.816 "code": -32603,
00:11:21.816 "message": "Unable to find target foobar"
00:11:21.816 }'
00:11:21.816 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:11:21.816 {
00:11:21.816 "nqn": "nqn.2016-06.io.spdk:cnode29322",
00:11:21.816 "tgt_name": "foobar",
00:11:21.816 "method": "nvmf_create_subsystem",
00:11:21.816 "req_id": 1
00:11:21.816 }
00:11:21.816 Got JSON-RPC error response
00:11:21.816 response:
00:11:21.816 {
00:11:21.816 "code": -32603,
00:11:21.816 "message": "Unable to find target foobar"
00:11:21.816 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:11:21.816 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:11:21.816 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24030
00:11:22.074 [2024-12-11 14:48:04.628390] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24030: invalid serial number 'SPDKISFASTANDAWESOME'
00:11:22.074 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:11:22.074 {
00:11:22.074 "nqn": "nqn.2016-06.io.spdk:cnode24030",
00:11:22.074 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:11:22.074 "method": "nvmf_create_subsystem",
00:11:22.074 "req_id": 1
00:11:22.074 }
00:11:22.074 Got JSON-RPC error response
00:11:22.074 response:
00:11:22.074 {
00:11:22.074 "code": -32602,
00:11:22.074 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:11:22.074 }'
00:11:22.074 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:11:22.074 {
00:11:22.074 "nqn": "nqn.2016-06.io.spdk:cnode24030",
00:11:22.074 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:11:22.074 "method": "nvmf_create_subsystem",
00:11:22.074 "req_id": 1
00:11:22.074 }
00:11:22.074 Got JSON-RPC error response
00:11:22.074 response:
00:11:22.074 {
00:11:22.074 "code": -32602,
00:11:22.074 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:11:22.074 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:11:22.074 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:11:22.074 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode9584
00:11:22.332 [2024-12-11 14:48:04.893226] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9584: invalid model number 'SPDK_Controller'
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:11:22.332 {
00:11:22.332 "nqn": "nqn.2016-06.io.spdk:cnode9584",
00:11:22.332 "model_number": "SPDK_Controller\u001f",
00:11:22.332 "method": "nvmf_create_subsystem",
00:11:22.332 "req_id": 1
00:11:22.332 }
00:11:22.332 Got JSON-RPC error response
00:11:22.332 response:
00:11:22.332 {
00:11:22.332 "code": -32602,
00:11:22.332 "message": "Invalid MN SPDK_Controller\u001f"
00:11:22.332 }'
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:11:22.332 {
00:11:22.332 "nqn": "nqn.2016-06.io.spdk:cnode9584",
00:11:22.332 "model_number": "SPDK_Controller\u001f",
00:11:22.332 "method": "nvmf_create_subsystem",
00:11:22.332 "req_id": 1
00:11:22.332 }
00:11:22.332 Got JSON-RPC error response
00:11:22.332 response:
00:11:22.332 {
00:11:22.332 "code": -32602,
00:11:22.332 "message": "Invalid MN SPDK_Controller\u001f"
00:11:22.332 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
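Every negative test in this suite follows the same pattern: invalid.sh invokes rpc.py with one deliberately bad argument (an unknown target name above, then serial numbers, model numbers and cntlid bounds), captures the JSON-RPC error it gets back, and glob-matches the message text. Reduced to its skeleton, the pattern looks roughly like this (a sketch; the real script also installs the cleanup traps seen earlier):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29322 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || exit 1   # the failure itself is the expected result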
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:22.332 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[per-character trace condensed: 21 passes of target/invalid.sh@24-25 between 00:11:22.332 and 00:11:22.333 each pick a random chars[] entry, turn it into hex with printf %x, render it with echo -e, and append it via string+=]
00:11:22.333 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]]
00:11:22.333 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'w[.w1LY6<klGYS;K3&5va'
00:11:22.333 14:48:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'w[.w1LY6<klGYS;K3&5va' nqn.2016-06.io.spdk:cnode2669
00:11:22.593 [2024-12-11 14:48:05.242418] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2669: invalid serial number 'w[.w1LY6<klGYS;K3&5va'
00:11:22.593 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:11:22.593 {
00:11:22.593 "nqn": "nqn.2016-06.io.spdk:cnode2669",
00:11:22.593 "serial_number": "w[.w1LY6<klGYS;K3&5va",
00:11:22.593 "method": "nvmf_create_subsystem",
00:11:22.593 "req_id": 1
00:11:22.593 }
00:11:22.593 Got JSON-RPC error response
00:11:22.593 response:
00:11:22.593 {
00:11:22.593 "code": -32602,
00:11:22.593 "message": "Invalid SN w[.w1LY6<klGYS;K3&5va"
00:11:22.593 }'
00:11:22.593 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:11:22.593 {
00:11:22.593 "nqn": "nqn.2016-06.io.spdk:cnode2669",
00:11:22.593 "serial_number": "w[.w1LY6<klGYS;K3&5va",
00:11:22.593 "method": "nvmf_create_subsystem",
00:11:22.593 "req_id": 1
00:11:22.593 }
00:11:22.593 Got JSON-RPC error response
00:11:22.593 response:
00:11:22.593 {
00:11:22.593 "code": -32602,
00:11:22.593 "message": "Invalid SN w[.w1LY6<klGYS;K3&5va"
00:11:22.593 } == *\I\n\v\a\l\i\d\ \S\N* ]]
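gen_random_s, whose trace is condensed above, draws each character from the chars array of ASCII codes 32-127: a random entry is converted to hex with printf %x, rendered with echo -e '\xNN', and appended to string (the RANDOM=0 seed set at the top of invalid.sh keeps the sequence reproducible). A compact sketch of the same idea (simplified: it leaves out the space character and the leading-'-' guard that the real function checks at line 28):

    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 33 126))   # printable ASCII codes, space left out
        for (( ll = 0; ll < length; ll++ )); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    gen_random_s 21   # a 21-character serial, like 'w[.w1LY6<klGYS;K3&5va' above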
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:11:22.594 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
[per-character trace condensed: 41 passes of target/invalid.sh@24-25 between 00:11:22.594 and 00:11:22.854 assemble the model number 'Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y' one character at a time, the same way as the serial above]
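A note on the two generated lengths: in NVMe's Identify Controller data the serial number field is 20 bytes and the model number field is 40, so the 21-character serial rejected above and the 41-character model number just assembled are each exactly one character over their field, which is why the create call below fails with Invalid MN. A quick check of the two strings against those limits (sketch):

    sn='w[.w1LY6<klGYS;K3&5va'                       # 21 chars, one over the 20-byte SN field
    mn='Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y'   # 41 chars, one over the 40-byte MN field
    echo "SN length: ${#sn}, MN length: ${#mn}"

The cntlid probes further down bound the other numeric window this suite checks: the Invalid cntlid range [0-65519] and [65520-65519] responses place the valid controller-ID window at 1 through 65519 (0xFFEF), with min_cntlid required to stay at or below max_cntlid.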
-- # printf %x 108 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y' 00:11:22.854 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y' nqn.2016-06.io.spdk:cnode17855 00:11:23.112 [2024-12-11 14:48:05.663777] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17855: invalid model number 'Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y' 00:11:23.112 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:23.112 { 00:11:23.112 "nqn": "nqn.2016-06.io.spdk:cnode17855", 00:11:23.112 "model_number": "Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y", 00:11:23.112 "method": "nvmf_create_subsystem", 00:11:23.112 "req_id": 1 00:11:23.112 } 00:11:23.112 Got JSON-RPC error response 00:11:23.112 response: 00:11:23.112 { 00:11:23.112 "code": -32602, 00:11:23.112 "message": "Invalid MN Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y" 00:11:23.112 }' 00:11:23.112 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:23.112 { 00:11:23.112 "nqn": "nqn.2016-06.io.spdk:cnode17855", 00:11:23.112 "model_number": "Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y", 00:11:23.112 "method": "nvmf_create_subsystem", 00:11:23.112 
"req_id": 1 00:11:23.112 } 00:11:23.112 Got JSON-RPC error response 00:11:23.112 response: 00:11:23.112 { 00:11:23.112 "code": -32602, 00:11:23.112 "message": "Invalid MN Nq1*zXl$~l)?Nt7{rfN;:_a6A+N+Xm+n`>{WYlM Y" 00:11:23.112 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:23.112 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:23.370 [2024-12-11 14:48:05.924717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.370 14:48:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:23.627 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:23.627 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:23.628 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:23.628 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:23.628 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:23.885 [2024-12-11 14:48:06.474472] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:23.885 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:23.885 { 00:11:23.885 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:23.885 "listen_address": { 00:11:23.885 "trtype": "tcp", 00:11:23.885 "traddr": "", 00:11:23.885 "trsvcid": "4421" 00:11:23.885 }, 00:11:23.885 "method": "nvmf_subsystem_remove_listener", 00:11:23.885 "req_id": 1 00:11:23.885 } 00:11:23.885 Got JSON-RPC error response 00:11:23.885 response: 00:11:23.885 { 00:11:23.885 "code": -32602, 00:11:23.885 "message": "Invalid parameters" 00:11:23.885 }' 00:11:23.885 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:23.885 { 00:11:23.885 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:23.885 "listen_address": { 00:11:23.885 "trtype": "tcp", 00:11:23.885 "traddr": "", 00:11:23.885 "trsvcid": "4421" 00:11:23.885 }, 00:11:23.885 "method": "nvmf_subsystem_remove_listener", 00:11:23.885 "req_id": 1 00:11:23.885 } 00:11:23.885 Got JSON-RPC error response 00:11:23.885 response: 00:11:23.885 { 00:11:23.885 "code": -32602, 00:11:23.885 "message": "Invalid parameters" 00:11:23.885 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:23.885 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14237 -i 0 00:11:24.143 [2024-12-11 14:48:06.751354] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14237: invalid cntlid range [0-65519] 00:11:24.143 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:24.143 { 00:11:24.143 "nqn": "nqn.2016-06.io.spdk:cnode14237", 00:11:24.143 "min_cntlid": 0, 00:11:24.143 "method": "nvmf_create_subsystem", 00:11:24.143 "req_id": 1 00:11:24.143 } 00:11:24.143 Got JSON-RPC error response 00:11:24.143 response: 00:11:24.143 { 00:11:24.143 "code": -32602, 00:11:24.143 "message": "Invalid 
cntlid range [0-65519]" 00:11:24.143 }' 00:11:24.143 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:24.143 { 00:11:24.143 "nqn": "nqn.2016-06.io.spdk:cnode14237", 00:11:24.143 "min_cntlid": 0, 00:11:24.143 "method": "nvmf_create_subsystem", 00:11:24.143 "req_id": 1 00:11:24.143 } 00:11:24.143 Got JSON-RPC error response 00:11:24.143 response: 00:11:24.143 { 00:11:24.143 "code": -32602, 00:11:24.143 "message": "Invalid cntlid range [0-65519]" 00:11:24.143 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.143 14:48:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5922 -i 65520 00:11:24.401 [2024-12-11 14:48:07.024298] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5922: invalid cntlid range [65520-65519] 00:11:24.401 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:24.401 { 00:11:24.401 "nqn": "nqn.2016-06.io.spdk:cnode5922", 00:11:24.401 "min_cntlid": 65520, 00:11:24.401 "method": "nvmf_create_subsystem", 00:11:24.401 "req_id": 1 00:11:24.401 } 00:11:24.401 Got JSON-RPC error response 00:11:24.401 response: 00:11:24.401 { 00:11:24.401 "code": -32602, 00:11:24.401 "message": "Invalid cntlid range [65520-65519]" 00:11:24.401 }' 00:11:24.401 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:24.401 { 00:11:24.401 "nqn": "nqn.2016-06.io.spdk:cnode5922", 00:11:24.401 "min_cntlid": 65520, 00:11:24.401 "method": "nvmf_create_subsystem", 00:11:24.401 "req_id": 1 00:11:24.401 } 00:11:24.401 Got JSON-RPC error response 00:11:24.401 response: 00:11:24.401 { 00:11:24.401 "code": -32602, 00:11:24.401 "message": "Invalid cntlid range [65520-65519]" 00:11:24.401 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.401 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode115 -I 0 00:11:24.658 [2024-12-11 14:48:07.321261] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode115: invalid cntlid range [1-0] 00:11:24.658 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:24.658 { 00:11:24.658 "nqn": "nqn.2016-06.io.spdk:cnode115", 00:11:24.658 "max_cntlid": 0, 00:11:24.658 "method": "nvmf_create_subsystem", 00:11:24.658 "req_id": 1 00:11:24.658 } 00:11:24.658 Got JSON-RPC error response 00:11:24.658 response: 00:11:24.658 { 00:11:24.658 "code": -32602, 00:11:24.658 "message": "Invalid cntlid range [1-0]" 00:11:24.658 }' 00:11:24.658 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:24.658 { 00:11:24.658 "nqn": "nqn.2016-06.io.spdk:cnode115", 00:11:24.658 "max_cntlid": 0, 00:11:24.658 "method": "nvmf_create_subsystem", 00:11:24.658 "req_id": 1 00:11:24.658 } 00:11:24.658 Got JSON-RPC error response 00:11:24.658 response: 00:11:24.658 { 00:11:24.658 "code": -32602, 00:11:24.658 "message": "Invalid cntlid range [1-0]" 00:11:24.658 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.658 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4816 -I 65520 00:11:24.916 [2024-12-11 
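Every rejection above is asserted the same way: the harness captures the JSON-RPC error text into $out, then glob-matches the expected message (that is what the backslash-escaped patterns like *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* are). Valid controller IDs span 1-65519, so each probe either sits outside that range or inverts it. The unrolled calls, re-sketched as a loop covering the whole sweep, including the two combinations that follow below (cnode names and the loop itself are illustrative):

  for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
      # $args is deliberately unquoted so it splits into separate flags
      out=$(scripts/rpc.py nvmf_create_subsystem \
            "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1) || true
      [[ $out == *'Invalid cntlid range'* ]] || { echo "not rejected: $args" >&2; exit 1; }
  done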
14:48:07.602183] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4816: invalid cntlid range [1-65520] 00:11:24.916 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:24.916 { 00:11:24.916 "nqn": "nqn.2016-06.io.spdk:cnode4816", 00:11:24.916 "max_cntlid": 65520, 00:11:24.916 "method": "nvmf_create_subsystem", 00:11:24.916 "req_id": 1 00:11:24.916 } 00:11:24.916 Got JSON-RPC error response 00:11:24.916 response: 00:11:24.916 { 00:11:24.916 "code": -32602, 00:11:24.916 "message": "Invalid cntlid range [1-65520]" 00:11:24.916 }' 00:11:24.916 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:24.916 { 00:11:24.916 "nqn": "nqn.2016-06.io.spdk:cnode4816", 00:11:24.916 "max_cntlid": 65520, 00:11:24.916 "method": "nvmf_create_subsystem", 00:11:24.916 "req_id": 1 00:11:24.916 } 00:11:24.916 Got JSON-RPC error response 00:11:24.916 response: 00:11:24.916 { 00:11:24.916 "code": -32602, 00:11:24.916 "message": "Invalid cntlid range [1-65520]" 00:11:24.916 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:24.916 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2932 -i 6 -I 5 00:11:25.173 [2024-12-11 14:48:07.891181] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2932: invalid cntlid range [6-5] 00:11:25.173 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:25.173 { 00:11:25.173 "nqn": "nqn.2016-06.io.spdk:cnode2932", 00:11:25.173 "min_cntlid": 6, 00:11:25.173 "max_cntlid": 5, 00:11:25.173 "method": "nvmf_create_subsystem", 00:11:25.173 "req_id": 1 00:11:25.173 } 00:11:25.173 Got JSON-RPC error response 00:11:25.173 response: 00:11:25.173 { 00:11:25.173 "code": -32602, 00:11:25.173 "message": "Invalid cntlid range [6-5]" 00:11:25.173 }' 00:11:25.173 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:25.173 { 00:11:25.173 "nqn": "nqn.2016-06.io.spdk:cnode2932", 00:11:25.173 "min_cntlid": 6, 00:11:25.173 "max_cntlid": 5, 00:11:25.173 "method": "nvmf_create_subsystem", 00:11:25.173 "req_id": 1 00:11:25.173 } 00:11:25.173 Got JSON-RPC error response 00:11:25.174 response: 00:11:25.174 { 00:11:25.174 "code": -32602, 00:11:25.174 "message": "Invalid cntlid range [6-5]" 00:11:25.174 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:25.174 14:48:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:25.431 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:25.431 { 00:11:25.431 "name": "foobar", 00:11:25.431 "method": "nvmf_delete_target", 00:11:25.431 "req_id": 1 00:11:25.431 } 00:11:25.431 Got JSON-RPC error response 00:11:25.431 response: 00:11:25.431 { 00:11:25.431 "code": -32602, 00:11:25.432 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:11:25.432 }' 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:25.432 { 00:11:25.432 "name": "foobar", 00:11:25.432 "method": "nvmf_delete_target", 00:11:25.432 "req_id": 1 00:11:25.432 } 00:11:25.432 Got JSON-RPC error response 00:11:25.432 response: 00:11:25.432 { 00:11:25.432 "code": -32602, 00:11:25.432 "message": "The specified target doesn't exist, cannot delete it." 00:11:25.432 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.432 rmmod nvme_tcp 00:11:25.432 rmmod nvme_fabrics 00:11:25.432 rmmod nvme_keyring 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 629511 ']' 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 629511 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 629511 ']' 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 629511 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629511 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629511' 00:11:25.432 killing process with pid 629511 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 629511 00:11:25.432 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 629511 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
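The nvmftestfini sequence above is the standard teardown: flush, unload the kernel NVMe-oF modules in dependency order, then kill the target app the test started. A minimal sketch of that shutdown order, assuming $nvmfpid holds the target's PID (the harness's killprocess also checks the process name, omitted here):

  sync
  for mod in nvme-tcp nvme-fabrics nvme-keyring; do
      modprobe -v -r "$mod" || true     # tolerate already-unloaded modules
  done
  if kill -0 "$nvmfpid" 2>/dev/null; then
      kill "$nvmfpid"
      wait "$nvmfpid" 2>/dev/null || true
  fi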
-- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.690 14:48:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.230 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:28.230 00:11:28.230 real 0m9.082s 00:11:28.230 user 0m21.782s 00:11:28.230 sys 0m2.553s 00:11:28.230 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.230 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:28.230 ************************************ 00:11:28.230 END TEST nvmf_invalid 00:11:28.230 ************************************ 00:11:28.230 14:48:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:28.230 14:48:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:28.231 ************************************ 00:11:28.231 START TEST nvmf_connect_stress 00:11:28.231 ************************************ 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:28.231 * Looking for test storage... 
00:11:28.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:28.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.231 --rc genhtml_branch_coverage=1 00:11:28.231 --rc genhtml_function_coverage=1 00:11:28.231 --rc genhtml_legend=1 00:11:28.231 --rc geninfo_all_blocks=1 00:11:28.231 --rc geninfo_unexecuted_blocks=1 00:11:28.231 00:11:28.231 ' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:28.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.231 --rc genhtml_branch_coverage=1 00:11:28.231 --rc genhtml_function_coverage=1 00:11:28.231 --rc genhtml_legend=1 00:11:28.231 --rc geninfo_all_blocks=1 00:11:28.231 --rc geninfo_unexecuted_blocks=1 00:11:28.231 00:11:28.231 ' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:28.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.231 --rc genhtml_branch_coverage=1 00:11:28.231 --rc genhtml_function_coverage=1 00:11:28.231 --rc genhtml_legend=1 00:11:28.231 --rc geninfo_all_blocks=1 00:11:28.231 --rc geninfo_unexecuted_blocks=1 00:11:28.231 00:11:28.231 ' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:28.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.231 --rc genhtml_branch_coverage=1 00:11:28.231 --rc genhtml_function_coverage=1 00:11:28.231 --rc genhtml_legend=1 00:11:28.231 --rc geninfo_all_blocks=1 00:11:28.231 --rc geninfo_unexecuted_blocks=1 00:11:28.231 00:11:28.231 ' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
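The scripts/common.sh machinery above is deciding whether the installed lcov predates 2.x so that the right coverage flags get exported. A self-contained sketch of that field-by-field comparison (the function name matches the helper seen above; the padding with :-0 is my own simplification of its length handling):

  lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
          (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not strictly less-than
  }
  lt 1.15 2 && echo 'lcov is older than 2.x'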
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:28.231 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
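A side effect visible in the exports above: paths/export.sh prepends the same toolchain directories on every re-source, so PATH accumulates duplicate after duplicate of the go, protoc, and golangci entries. Harmless, but a sketch that de-duplicates PATH in place while preserving order, in case the growth ever matters:

  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  export PATH=${PATH%:}    # strip the trailing ':' that awk's ORS leaves behind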
-- # '[' '' -eq 1 ']' 00:11:28.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:28.232 14:48:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.135 14:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.135 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:30.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:30.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:30.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:30.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
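The discovery above keys off sysfs: each supported PCI ID (0x159b is the Intel E810 "ice" part) is mapped to its kernel net device by globbing the device's net/ directory, and the basename of each hit becomes the interface name. The equivalent lookup, condensed:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $netdir ]] || continue              # no bound driver, no netdev
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done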
-- # net_devs+=("${pci_net_devs[@]}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.136 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:11:30.395 00:11:30.395 --- 10.0.0.2 ping statistics --- 00:11:30.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.395 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:11:30.395 00:11:30.395 --- 10.0.0.1 ping statistics --- 00:11:30.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.395 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.395 14:48:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=632780 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 632780 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 632780 ']' 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
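All of that plumbing yields a self-contained two-port loopback: one E810 port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace to act as the target, its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, and the two pings prove the path in both directions. The same topology, reduced to its bare commands as they appear in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # root ns -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port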
/var/tmp/spdk.sock...' 00:11:30.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.395 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.395 [2024-12-11 14:48:13.072585] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:11:30.395 [2024-12-11 14:48:13.072657] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.395 [2024-12-11 14:48:13.144508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:30.654 [2024-12-11 14:48:13.206249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.654 [2024-12-11 14:48:13.206305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.654 [2024-12-11 14:48:13.206336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.654 [2024-12-11 14:48:13.206348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.654 [2024-12-11 14:48:13.206359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.654 [2024-12-11 14:48:13.207803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.654 [2024-12-11 14:48:13.207865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.654 [2024-12-11 14:48:13.207869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.654 [2024-12-11 14:48:13.364990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
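nvmfappstart above launches nvmf_tgt inside the target namespace and blocks until its JSON-RPC socket answers. A condensed sketch of that start-up handshake; the polling loop is my own shorthand, whereas the harness's waitforlisten retries an RPC with a bounded number of attempts:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1      # bail out if the target died during boot
      sleep 0.5
  done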
00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.654 [2024-12-11 14:48:13.382359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.654 NULL1 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=632808 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 
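With the target listening, connect_stress.sh provisions the minimal configuration the stressor needs and launches it in the background. The RPC sequence above collected in one place, with paths abbreviated (rpc_cmd is the harness wrapper around scripts/rpc.py):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512-byte blocks
  test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!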
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.654 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:30.912 14:48:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.912 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.169 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.169 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:31.169 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.169 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.169 14:48:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.427 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.427 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:31.427 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.427 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.427 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.685 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.685 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:31.685 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.685 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.685 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.250 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.250 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:32.250 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.250 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.250 14:48:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.516 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.517 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:32.517 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.517 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.517 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.779 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.779 14:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:32.779 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.779 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.779 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.037 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.037 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:33.037 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.037 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.037 14:48:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.294 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.294 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:33.294 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.294 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.294 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.859 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.859 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:33.859 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.859 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.859 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.117 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.117 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:34.117 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.117 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.117 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.375 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.375 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:34.375 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.375 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.375 14:48:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.632 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.632 14:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:34.632 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.632 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.632 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.890 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.890 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:34.890 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.890 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.890 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.454 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.454 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:35.454 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.454 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.454 14:48:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.712 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.712 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:35.712 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.712 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.712 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.969 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.969 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:35.969 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.969 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.969 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.226 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.226 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:36.226 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.226 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.226 14:48:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.484 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.484 14:48:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:36.484 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.484 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.484 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.051 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.051 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:37.051 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.051 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.051 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.309 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.309 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:37.309 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.309 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.309 14:48:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.566 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.566 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:37.566 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.566 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.566 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.823 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.823 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:37.823 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.823 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.823 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.081 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.081 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:38.081 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.081 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.081 14:48:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.647 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.647 14:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:38.647 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.647 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.647 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.904 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.904 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:38.904 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.904 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.904 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.162 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.162 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:39.162 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.162 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.162 14:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.420 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.420 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:39.420 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.420 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.420 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.678 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.678 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:39.678 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.678 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.678 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.272 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.272 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:40.272 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.272 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.272 14:48:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.584 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.584 14:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:40.584 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.584 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.584 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.864 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.864 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:40.864 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.864 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.864 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.864 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:41.121 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.121 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 632808 00:11:41.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (632808) - No such process 00:11:41.121 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 632808 00:11:41.121 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.122 rmmod nvme_tcp 00:11:41.122 rmmod nvme_fabrics 00:11:41.122 rmmod nvme_keyring 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 632780 ']' 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 632780 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 632780 ']' 00:11:41.122 14:48:23 
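The alternating kill -0 632808 / rpc_cmd records above are a liveness poll: as long as the stressor process answers kill -0, the queued RPCs in rpc.txt are replayed against the target under connection churn; when line 34 finally reports "(632808) - No such process", the script reaps the PID, deletes rpc.txt, clears its traps, and nvmftestfini tears the stack down (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines). Roughly equivalent control flow, sketched here; the real loop structure may differ in detail:

    while kill -0 "$PERF_PID"; do   # prints "No such process" once the stressor exits
        rpc_cmd <"$rpcs"            # replay the queued RPC batches mid-churn
    done
    wait "$PERF_PID"                # reap; the harness checks the exit status
    rm -f "$rpcs"
    trap - SIGINT SIGTERM EXIT
    nvmftestfini                    # kill nvmf_tgt, modprobe -r nvme-tcp / nvme-fabrics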
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 632780 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 632780 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 632780' 00:11:41.122 killing process with pid 632780 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 632780 00:11:41.122 14:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 632780 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.382 14:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.921 00:11:43.921 real 0m15.651s 00:11:43.921 user 0m38.659s 00:11:43.921 sys 0m6.022s 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.921 ************************************ 00:11:43.921 END TEST nvmf_connect_stress 00:11:43.921 ************************************ 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.921 14:48:26 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.921 ************************************ 00:11:43.921 START TEST nvmf_fused_ordering 00:11:43.921 ************************************ 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:43.921 * Looking for test storage... 00:11:43.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:43.921 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.922 --rc genhtml_branch_coverage=1 00:11:43.922 --rc genhtml_function_coverage=1 00:11:43.922 --rc genhtml_legend=1 00:11:43.922 --rc geninfo_all_blocks=1 00:11:43.922 --rc geninfo_unexecuted_blocks=1 00:11:43.922 00:11:43.922 ' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.922 --rc genhtml_branch_coverage=1 00:11:43.922 --rc genhtml_function_coverage=1 00:11:43.922 --rc genhtml_legend=1 00:11:43.922 --rc geninfo_all_blocks=1 00:11:43.922 --rc geninfo_unexecuted_blocks=1 00:11:43.922 00:11:43.922 ' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.922 --rc genhtml_branch_coverage=1 00:11:43.922 --rc genhtml_function_coverage=1 00:11:43.922 --rc genhtml_legend=1 00:11:43.922 --rc geninfo_all_blocks=1 00:11:43.922 --rc geninfo_unexecuted_blocks=1 00:11:43.922 00:11:43.922 ' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.922 --rc genhtml_branch_coverage=1 00:11:43.922 --rc genhtml_function_coverage=1 00:11:43.922 --rc genhtml_legend=1 00:11:43.922 --rc geninfo_all_blocks=1 00:11:43.922 --rc geninfo_unexecuted_blocks=1 00:11:43.922 00:11:43.922 ' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
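The scripts/common.sh records above are cmp_versions deciding that the installed lcov (1.15) is older than 2: each version string is split on '.', '-' and ':' into an array and compared field by field, with missing fields defaulting to zero. A compact equivalent under the assumption of purely numeric fields; this is an illustration, not the verbatim cmp_versions implementation:

    version_lt() {   # returns 0 when $1 sorts strictly before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<<"$1"
        IFS='.-:' read -ra ver2 <<<"$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1     # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov predates 2"   # matches the 'lt 1.15 2' call in the trace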
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:43.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.922 14:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:45.830 14:48:28 
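The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating [ '' -eq 1 ] while building the nvmf_tgt argument list: the variable under test is empty, so the numeric comparison fails noisily but harmlessly and the branch is skipped. A defensive variant, shown purely as an illustration with a hypothetical variable name, not as a claim about how common.sh is actually written:

    # default an unset/empty flag to 0 before the numeric test
    if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then   # SPDK_SOME_FLAG is a made-up name
        NVMF_APP+=(--some-option)               # --some-option is likewise illustrative
    fi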
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:45.830 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:45.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:45.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:45.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:45.831 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
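Device discovery above iterates the e810 PCI functions found earlier (0000:0a:00.0 and 0000:0a:00.1, vendor:device 0x8086:0x159b) and resolves each to its kernel interface through sysfs, yielding cvl_0_0 and cvl_0_1. The loop, condensed from the trace using its own array names:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:0a:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done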
-- # net_devs+=("${pci_net_devs[@]}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:45.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:45.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:11:45.831 00:11:45.831 --- 10.0.0.2 ping statistics --- 00:11:45.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.831 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:45.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:45.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:11:45.831 00:11:45.831 --- 10.0.0.1 ping statistics --- 00:11:45.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:45.831 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=635971 00:11:45.831 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 635971 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 635971 ']' 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
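nvmf_tcp_init above splits the two ports across network namespaces: the target NIC cvl_0_0 moves into cvl_0_0_ns_spdk and takes 10.0.0.2/24, while the initiator NIC cvl_0_1 stays in the root namespace as 10.0.0.1/24, so NVMe/TCP traffic genuinely crosses between the two physical functions rather than looping back; the pair of single-packet pings (0.360 ms and 0.149 ms here) then verifies reachability in both directions before the target starts. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespaced target -> root ns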
/var/tmp/spdk.sock...' 00:11:45.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.832 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.090 [2024-12-11 14:48:28.633121] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:11:46.090 [2024-12-11 14:48:28.633220] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.090 [2024-12-11 14:48:28.707732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.090 [2024-12-11 14:48:28.766401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.090 [2024-12-11 14:48:28.766457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.090 [2024-12-11 14:48:28.766486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.090 [2024-12-11 14:48:28.766497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.090 [2024-12-11 14:48:28.766514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.090 [2024-12-11 14:48:28.767206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 [2024-12-11 14:48:28.916558] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 [2024-12-11 14:48:28.932799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 NULL1 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.348 14:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:46.348 [2024-12-11 14:48:28.979432] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
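The trace above is the complete target-side bring-up for this test case: one TCP transport, one subsystem with a TCP listener on 10.0.0.2:4420, a 1000 MiB null bdev, and that bdev attached as namespace 1. A minimal sketch of the same sequence done by hand, assuming the nvmf_tgt started above is still listening on /var/tmp/spdk.sock, and carrying every flag over verbatim from the logged rpc_cmd calls:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"                  # talks to /var/tmp/spdk.sock by default

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10          # -a allow any host, -m cap at 10 namespaces
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512      # 1000 MiB backing bdev, 512-byte blocks
    "$rpc" bdev_wait_for_examine
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Initiator side, exactly as invoked above:
    "$spdk/test/nvme/fused_ordering/fused_ordering" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Each fused_ordering(N) line below marks one completed iteration of the tool's submission loop. In NVMe the one architected fused operation is Compare and Write: the two halves carry FUSE=01b and FUSE=10b in command dword 0 and must be submitted to the queue adjacently and in order, which is the ordering property being exercised here over the TCP transport.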
00:11:46.348 [2024-12-11 14:48:28.979476] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid636109 ] 00:11:46.914 Attached to nqn.2016-06.io.spdk:cnode1 00:11:46.914 Namespace ID: 1 size: 1GB 00:11:46.914 [output condensed: fused_ordering(0) through fused_ordering(957) printed sequentially, timestamps advancing from 00:11:46.914 to 00:11:48.564] fused_ordering(958)
00:11:48.564 fused_ordering(959) 00:11:48.564 fused_ordering(960) 00:11:48.564 fused_ordering(961) 00:11:48.564 fused_ordering(962) 00:11:48.564 fused_ordering(963) 00:11:48.564 fused_ordering(964) 00:11:48.564 fused_ordering(965) 00:11:48.564 fused_ordering(966) 00:11:48.564 fused_ordering(967) 00:11:48.564 fused_ordering(968) 00:11:48.564 fused_ordering(969) 00:11:48.564 fused_ordering(970) 00:11:48.565 fused_ordering(971) 00:11:48.565 fused_ordering(972) 00:11:48.565 fused_ordering(973) 00:11:48.565 fused_ordering(974) 00:11:48.565 fused_ordering(975) 00:11:48.565 fused_ordering(976) 00:11:48.565 fused_ordering(977) 00:11:48.565 fused_ordering(978) 00:11:48.565 fused_ordering(979) 00:11:48.565 fused_ordering(980) 00:11:48.565 fused_ordering(981) 00:11:48.565 fused_ordering(982) 00:11:48.565 fused_ordering(983) 00:11:48.565 fused_ordering(984) 00:11:48.565 fused_ordering(985) 00:11:48.565 fused_ordering(986) 00:11:48.565 fused_ordering(987) 00:11:48.565 fused_ordering(988) 00:11:48.565 fused_ordering(989) 00:11:48.565 fused_ordering(990) 00:11:48.565 fused_ordering(991) 00:11:48.565 fused_ordering(992) 00:11:48.565 fused_ordering(993) 00:11:48.565 fused_ordering(994) 00:11:48.565 fused_ordering(995) 00:11:48.565 fused_ordering(996) 00:11:48.565 fused_ordering(997) 00:11:48.565 fused_ordering(998) 00:11:48.565 fused_ordering(999) 00:11:48.565 fused_ordering(1000) 00:11:48.565 fused_ordering(1001) 00:11:48.565 fused_ordering(1002) 00:11:48.565 fused_ordering(1003) 00:11:48.565 fused_ordering(1004) 00:11:48.565 fused_ordering(1005) 00:11:48.565 fused_ordering(1006) 00:11:48.565 fused_ordering(1007) 00:11:48.565 fused_ordering(1008) 00:11:48.565 fused_ordering(1009) 00:11:48.565 fused_ordering(1010) 00:11:48.565 fused_ordering(1011) 00:11:48.565 fused_ordering(1012) 00:11:48.565 fused_ordering(1013) 00:11:48.565 fused_ordering(1014) 00:11:48.565 fused_ordering(1015) 00:11:48.565 fused_ordering(1016) 00:11:48.565 fused_ordering(1017) 00:11:48.565 fused_ordering(1018) 00:11:48.565 fused_ordering(1019) 00:11:48.565 fused_ordering(1020) 00:11:48.565 fused_ordering(1021) 00:11:48.565 fused_ordering(1022) 00:11:48.565 fused_ordering(1023) 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:48.565 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:48.565 rmmod nvme_tcp 00:11:48.565 rmmod nvme_fabrics 00:11:48.565 rmmod nvme_keyring 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:48.823 14:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 635971 ']' 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 635971 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 635971 ']' 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 635971 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 635971 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 635971' 00:11:48.823 killing process with pid 635971 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 635971 00:11:48.823 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 635971 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.082 14:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.988 00:11:50.988 real 0m7.497s 00:11:50.988 user 0m5.048s 00:11:50.988 sys 0m3.124s 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.988 ************************************ 00:11:50.988 END TEST nvmf_fused_ordering 00:11:50.988 
************************************ 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.988 ************************************ 00:11:50.988 START TEST nvmf_ns_masking 00:11:50.988 ************************************ 00:11:50.988 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:50.988 * Looking for test storage... 00:11:51.247 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.247 --rc genhtml_branch_coverage=1 00:11:51.247 --rc genhtml_function_coverage=1 00:11:51.247 --rc genhtml_legend=1 00:11:51.247 --rc geninfo_all_blocks=1 00:11:51.247 --rc geninfo_unexecuted_blocks=1 00:11:51.247 00:11:51.247 ' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.247 --rc genhtml_branch_coverage=1 00:11:51.247 --rc genhtml_function_coverage=1 00:11:51.247 --rc genhtml_legend=1 00:11:51.247 --rc geninfo_all_blocks=1 00:11:51.247 --rc geninfo_unexecuted_blocks=1 00:11:51.247 00:11:51.247 ' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.247 --rc genhtml_branch_coverage=1 00:11:51.247 --rc genhtml_function_coverage=1 00:11:51.247 --rc genhtml_legend=1 00:11:51.247 --rc geninfo_all_blocks=1 00:11:51.247 --rc geninfo_unexecuted_blocks=1 00:11:51.247 00:11:51.247 ' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.247 --rc genhtml_branch_coverage=1 00:11:51.247 --rc genhtml_function_coverage=1 00:11:51.247 --rc genhtml_legend=1 00:11:51.247 --rc geninfo_all_blocks=1 00:11:51.247 --rc geninfo_unexecuted_blocks=1 00:11:51.247 00:11:51.247 ' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.247 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4f86b5c9-2e23-4fcd-84c3-86b9026952d1 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=13288272-272c-4d16-bf9e-2406f80049fd 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3d45e479-2681-48a9-aed8-47b4320331cb 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.248 14:48:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.778 14:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:53.778 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:53.778 14:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:53.778 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:53.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:53.778 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
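
Note: the loop being traced at this point is the harness's NIC discovery. nvmf/common.sh first builds whitelists of PCI vendor:device IDs (Intel E810 = 0x8086:0x1592/0x159b, X722 = 0x8086:0x37d2, plus the Mellanox ConnectX family), then resolves each matching PCI function to its kernel interface through sysfs, exactly as the pci_net_devs glob above shows. A minimal standalone sketch of the same technique, assuming pciutils is installed; the lspci invocation is illustrative and not part of the harness:

    # Enumerate Intel E810 functions and print the net device each exposes.
    for id in 8086:1592 8086:159b; do                        # E810 IDs from the arrays above
        for pci in $(lspci -Dn -d "$id" | awk '{print $1}'); do
            for dev in "/sys/bus/pci/devices/$pci/net/"*; do # same sysfs glob the trace uses
                [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:0a:00.0 -> cvl_0_0
            done
        done
    done
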
00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:53.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:53.779 14:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:53.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:53.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:11:53.779 00:11:53.779 --- 10.0.0.2 ping statistics --- 00:11:53.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.779 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:53.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:11:53.779 00:11:53.779 --- 10.0.0.1 ping statistics --- 00:11:53.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.779 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=638332 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 638332 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 638332 ']' 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.779 [2024-12-11 14:48:36.286140] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:11:53.779 [2024-12-11 14:48:36.286228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.779 [2024-12-11 14:48:36.357059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.779 [2024-12-11 14:48:36.409587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:53.779 [2024-12-11 14:48:36.409645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:53.779 [2024-12-11 14:48:36.409674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:53.779 [2024-12-11 14:48:36.409685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:53.779 [2024-12-11 14:48:36.409695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
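
Note: just before the target comes up, the harness wires the two discovered E810 ports into a loopback test bed: cvl_0_0 is moved into a private network namespace for the target while cvl_0_1 stays in the root namespace as the initiator, so NVMe/TCP traffic really crosses the physical link. Condensed from the trace above, with addresses and interface names exactly as this run used them (nvmf_tgt abbreviates the full build/bin path):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                       # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF    # target runs inside the namespace
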
00:11:53.779 [2024-12-11 14:48:36.410291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.779 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:54.037 [2024-12-11 14:48:36.793708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.294 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:54.294 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:54.295 14:48:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:54.553 Malloc1 00:11:54.553 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:54.811 Malloc2 00:11:54.811 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:55.069 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:55.326 14:48:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.584 [2024-12-11 14:48:38.190870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.584 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:55.584 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3d45e479-2681-48a9-aed8-47b4320331cb -a 10.0.0.2 -s 4420 -i 4 00:11:55.584 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:55.584 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:55.584 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.584 14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:55.584 
14:48:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.111 [ 0]:0x1 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc2fcd114d21423ba68d9cbdf84e3c56 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc2fcd114d21423ba68d9cbdf84e3c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.111 [ 0]:0x1 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc2fcd114d21423ba68d9cbdf84e3c56 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc2fcd114d21423ba68d9cbdf84e3c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.111 14:48:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.111 [ 1]:0x2 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.111 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.369 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a155cd94405340c081ce57978b33c723 00:11:58.369 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.369 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:58.369 14:48:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.369 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.627 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:58.885 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:58.885 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3d45e479-2681-48a9-aed8-47b4320331cb -a 10.0.0.2 -s 4420 -i 4 00:11:59.143 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:59.143 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:59.143 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.143 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:59.143 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:59.143 14:48:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.059 [ 0]:0x2 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:01.059 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.318 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=a155cd94405340c081ce57978b33c723 00:12:01.318 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.318 14:48:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.576 [ 0]:0x1 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc2fcd114d21423ba68d9cbdf84e3c56 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc2fcd114d21423ba68d9cbdf84e3c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.576 [ 1]:0x2 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a155cd94405340c081ce57978b33c723 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.576 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.834 14:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.834 [ 0]:0x2 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a155cd94405340c081ce57978b33c723 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:01.834 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.092 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:02.350 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:02.350 14:48:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3d45e479-2681-48a9-aed8-47b4320331cb -a 10.0.0.2 -s 4420 -i 4 00:12:02.610 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:02.610 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:02.610 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.610 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:02.610 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:02.610 14:48:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.508 [ 0]:0x1 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bc2fcd114d21423ba68d9cbdf84e3c56 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bc2fcd114d21423ba68d9cbdf84e3c56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.508 [ 1]:0x2 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.508 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.765 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a155cd94405340c081ce57978b33c723 00:12:04.765 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.765 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.023 [ 0]:0x2 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a155cd94405340c081ce57978b33c723 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.023 14:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.023 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.024 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.024 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:05.024 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:05.024 14:48:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:05.281 [2024-12-11 14:48:48.019976] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:05.281 request: 00:12:05.281 { 00:12:05.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.281 "nsid": 2, 00:12:05.282 "host": "nqn.2016-06.io.spdk:host1", 00:12:05.282 "method": "nvmf_ns_remove_host", 00:12:05.282 "req_id": 1 00:12:05.282 } 00:12:05.282 Got JSON-RPC error response 00:12:05.282 response: 00:12:05.282 { 00:12:05.282 "code": -32602, 00:12:05.282 "message": "Invalid parameters" 00:12:05.282 } 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:05.282 14:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.282 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.540 [ 0]:0x2 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a155cd94405340c081ce57978b33c723 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a155cd94405340c081ce57978b33c723 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:05.540 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=639950 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 639950 /var/tmp/host.sock 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 639950 ']' 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:05.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.798 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.798 [2024-12-11 14:48:48.372604] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:12:05.798 [2024-12-11 14:48:48.372686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639950 ] 00:12:05.798 [2024-12-11 14:48:48.439258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.798 [2024-12-11 14:48:48.495424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.056 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.056 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:06.056 14:48:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.314 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:06.571 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4f86b5c9-2e23-4fcd-84c3-86b9026952d1 00:12:06.571 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:06.571 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4F86B5C92E234FCD84C386B9026952D1 -i 00:12:06.828 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 13288272-272c-4d16-bf9e-2406f80049fd 00:12:06.828 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:06.828 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 13288272272C4D16BF9E2406F80049FD -i 00:12:07.393 14:48:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.393 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:07.651 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:07.651 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:08.218 nvme0n1 00:12:08.218 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:08.218 14:48:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:08.476 nvme1n2 00:12:08.476 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:08.476 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:08.476 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:08.476 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:08.476 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:08.734 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:08.734 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:08.734 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:08.734 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:08.991 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4f86b5c9-2e23-4fcd-84c3-86b9026952d1 == \4\f\8\6\b\5\c\9\-\2\e\2\3\-\4\f\c\d\-\8\4\c\3\-\8\6\b\9\0\2\6\9\5\2\d\1 ]] 00:12:08.991 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:08.991 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:08.991 14:48:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:09.249 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
13288272-272c-4d16-bf9e-2406f80049fd == \1\3\2\8\8\2\7\2\-\2\7\2\c\-\4\d\1\6\-\b\f\9\e\-\2\4\0\6\f\8\0\0\4\9\f\d ]] 00:12:09.249 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4f86b5c9-2e23-4fcd-84c3-86b9026952d1 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4F86B5C92E234FCD84C386B9026952D1 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4F86B5C92E234FCD84C386B9026952D1 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:09.815 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4F86B5C92E234FCD84C386B9026952D1 00:12:10.073 [2024-12-11 14:48:52.822009] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:10.073 [2024-12-11 14:48:52.822059] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:10.073 [2024-12-11 14:48:52.822091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:10.073 request: 00:12:10.073 { 00:12:10.073 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.073 "namespace": { 00:12:10.073 "bdev_name": 
"invalid", 00:12:10.073 "nsid": 1, 00:12:10.073 "nguid": "4F86B5C92E234FCD84C386B9026952D1", 00:12:10.073 "no_auto_visible": false, 00:12:10.073 "hide_metadata": false 00:12:10.073 }, 00:12:10.073 "method": "nvmf_subsystem_add_ns", 00:12:10.073 "req_id": 1 00:12:10.073 } 00:12:10.073 Got JSON-RPC error response 00:12:10.073 response: 00:12:10.073 { 00:12:10.073 "code": -32602, 00:12:10.073 "message": "Invalid parameters" 00:12:10.073 } 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4f86b5c9-2e23-4fcd-84c3-86b9026952d1 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:10.073 14:48:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4F86B5C92E234FCD84C386B9026952D1 -i 00:12:10.638 14:48:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:12.535 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:12.535 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:12.535 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 639950 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 639950 ']' 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 639950 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 639950 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 639950' 00:12:12.793 killing process with pid 639950 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 639950 00:12:12.793 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 639950 00:12:13.359 14:48:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:13.617 rmmod nvme_tcp 00:12:13.617 rmmod nvme_fabrics 00:12:13.617 rmmod nvme_keyring 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 638332 ']' 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 638332 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 638332 ']' 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 638332 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 638332 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 638332' 00:12:13.617 killing process with pid 638332 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 638332 00:12:13.617 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 638332 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:13.885 
14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:13.885 14:48:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:15.858 00:12:15.858 real 0m24.886s 00:12:15.858 user 0m35.871s 00:12:15.858 sys 0m4.799s 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:15.858 ************************************ 00:12:15.858 END TEST nvmf_ns_masking 00:12:15.858 ************************************ 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.858 14:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:16.118 ************************************ 00:12:16.118 START TEST nvmf_nvme_cli 00:12:16.118 ************************************ 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:16.118 * Looking for test storage... 
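Before the log moves on to nvme_cli: the ns_masking run that just ended drives everything through rpc.py — each namespace is re-added with an explicit NGUID derived from its UUID, then exposed to a single host via nvmf_ns_add_host. A condensed sketch of that sequence, reconstructed from the trace above (the trace only shows `tr -d -`, so the uppercasing step is an assumption made explicit here; the `-i` flag is reproduced exactly as logged and pairs with nvmf_ns_add_host for per-host masking):

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

# NGUID is the UUID with the dashes stripped (uppercasing assumed here,
# to match the 4F86B5C9... value passed on the logged command line)
uuid=4f86b5c9-2e23-4fcd-84c3-86b9026952d1
nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')

$rpc nvmf_subsystem_add_ns "$subsys" Malloc1 -n 1 -g "$nguid" -i
$rpc nvmf_ns_add_host "$subsys" 1 nqn.2016-06.io.spdk:host1   # only host1 sees nsid 1
```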
00:12:16.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:16.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.118 --rc genhtml_branch_coverage=1 00:12:16.118 --rc genhtml_function_coverage=1 00:12:16.118 --rc genhtml_legend=1 00:12:16.118 --rc geninfo_all_blocks=1 00:12:16.118 --rc geninfo_unexecuted_blocks=1 00:12:16.118 00:12:16.118 ' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:16.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.118 --rc genhtml_branch_coverage=1 00:12:16.118 --rc genhtml_function_coverage=1 00:12:16.118 --rc genhtml_legend=1 00:12:16.118 --rc geninfo_all_blocks=1 00:12:16.118 --rc geninfo_unexecuted_blocks=1 00:12:16.118 00:12:16.118 ' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:16.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.118 --rc genhtml_branch_coverage=1 00:12:16.118 --rc genhtml_function_coverage=1 00:12:16.118 --rc genhtml_legend=1 00:12:16.118 --rc geninfo_all_blocks=1 00:12:16.118 --rc geninfo_unexecuted_blocks=1 00:12:16.118 00:12:16.118 ' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:16.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.118 --rc genhtml_branch_coverage=1 00:12:16.118 --rc genhtml_function_coverage=1 00:12:16.118 --rc genhtml_legend=1 00:12:16.118 --rc geninfo_all_blocks=1 00:12:16.118 --rc geninfo_unexecuted_blocks=1 00:12:16.118 00:12:16.118 ' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
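The `lt 1.15 2` check traced above gates the LCOV coverage options on lcov's major version. A standalone rendering of the field-by-field comparison scripts/common.sh walks through, simplified to the less-than case with numeric fields (the real cmp_versions also handles the other operators):

```bash
# Split each version on . - : and compare fields numerically, treating a
# missing field as 0 -- so 1.15 < 2 because 1 < 2 in the first field.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "lcov predates v2"   # mirrors the 'lt 1.15 2' call above
```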
00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.118 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:16.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:16.119 14:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:16.119 14:48:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:18.663 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:18.663 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.663 
14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:18.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:18.663 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:18.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.664 14:49:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:18.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:18.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:12:18.664 00:12:18.664 --- 10.0.0.2 ping statistics --- 00:12:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.664 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:18.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:12:18.664 00:12:18.664 --- 10.0.0.1 ping statistics --- 00:12:18.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.664 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=642868 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 642868 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 642868 ']' 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.664 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.664 [2024-12-11 14:49:01.207662] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:12:18.664 [2024-12-11 14:49:01.207750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.664 [2024-12-11 14:49:01.278693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.664 [2024-12-11 14:49:01.334488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.664 [2024-12-11 14:49:01.334564] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.664 [2024-12-11 14:49:01.334579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.664 [2024-12-11 14:49:01.334604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.664 [2024-12-11 14:49:01.334614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.664 [2024-12-11 14:49:01.336233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.664 [2024-12-11 14:49:01.336292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.664 [2024-12-11 14:49:01.336400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:18.664 [2024-12-11 14:49:01.336403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 [2024-12-11 14:49:01.485116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 Malloc0 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
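With the reactors up, the records around this point configure the target entirely over JSON-RPC. Replayed by hand against a live nvmf_tgt (the test's rpc_cmd talks to the default /var/tmp/spdk.sock UNIX socket), the same bring-up is roughly the following, with every command and flag taken verbatim from the trace:

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, flags as traced
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```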
00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 Malloc1 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 [2024-12-11 14:49:01.587502] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.924 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:19.183 00:12:19.183 Discovery Log Number of Records 2, Generation counter 2 00:12:19.183 =====Discovery Log Entry 0====== 00:12:19.183 trtype: tcp 00:12:19.183 adrfam: ipv4 00:12:19.183 subtype: current discovery subsystem 00:12:19.183 treq: not required 00:12:19.183 portid: 0 00:12:19.183 trsvcid: 4420 00:12:19.183 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:19.183 traddr: 10.0.0.2 00:12:19.183 eflags: explicit discovery connections, duplicate discovery information 00:12:19.183 sectype: none 00:12:19.183 =====Discovery Log Entry 1====== 00:12:19.183 trtype: tcp 00:12:19.183 adrfam: ipv4 00:12:19.183 subtype: nvme subsystem 00:12:19.183 treq: not required 00:12:19.183 portid: 0 00:12:19.183 trsvcid: 4420 00:12:19.183 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:19.183 traddr: 10.0.0.2 00:12:19.183 eflags: none 00:12:19.183 sectype: none 00:12:19.183 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:19.183 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:19.184 14:49:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.750 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:19.751 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:19.751 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.751 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:19.751 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:19.751 14:49:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:22.282 14:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.282 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:22.282 /dev/nvme0n2 ]] 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:22.283 14:49:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.283 14:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:22.283 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:22.283 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:22.283 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.283 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:22.283 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:22.541 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:22.541 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:22.542 rmmod nvme_tcp 00:12:22.542 rmmod nvme_fabrics 00:12:22.542 rmmod nvme_keyring 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 642868 ']' 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 642868 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 642868 ']' 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 642868 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 642868 
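The waitforserial/waitforserial_disconnect pattern traced a few records back boils down to polling lsblk until the expected number of block devices carrying the subsystem serial show up. A condensed sketch of that helper as this trace exercises it (the real version in autotest_common.sh also has the disconnect variant, which waits for the count to drop back):

```bash
# Poll up to ~16 times, two seconds apart, for block devices whose
# SERIAL column matches the subsystem serial number.
waitforserial() {
    local serial=$1 expected=${2:-1} i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == expected )) && return 0
    done
    return 1
}
waitforserial SPDKISFASTANDAWESOME 2   # cnode1 exports two namespaces in this test
```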
00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 642868' 00:12:22.542 killing process with pid 642868 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 642868 00:12:22.542 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 642868 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.803 14:49:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.716 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:24.716 00:12:24.716 real 0m8.836s 00:12:24.716 user 0m16.955s 00:12:24.716 sys 0m2.338s 00:12:24.716 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.716 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:24.716 ************************************ 00:12:24.716 END TEST nvmf_nvme_cli 00:12:24.716 ************************************ 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.976 ************************************ 00:12:24.976 START TEST nvmf_vfio_user 00:12:24.976 ************************************ 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 
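(The killprocess helper traced at autotest_common.sh@954-@978 just above follows this shape - a sketch assembled from the trace; the sudo special case at @964 was not taken in this run, so its branch body is assumed:)

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                 # @954: refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 1    # @958: bail if already gone
      if [ "$(uname)" = Linux ]; then           # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960: "reactor_0" here
      fi
      # @964 compares $process_name against "sudo" (would signal the child instead)
      echo "killing process with pid $pid"      # @972
      kill "$pid"                               # @973
      wait "$pid"                               # @978: reap it so teardown can continue
  }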
00:12:24.976 * Looking for test storage... 00:12:24.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:24.976 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.977 --rc genhtml_branch_coverage=1 00:12:24.977 --rc genhtml_function_coverage=1 00:12:24.977 --rc genhtml_legend=1 00:12:24.977 --rc geninfo_all_blocks=1 00:12:24.977 --rc geninfo_unexecuted_blocks=1 00:12:24.977 00:12:24.977 ' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.977 --rc genhtml_branch_coverage=1 00:12:24.977 --rc genhtml_function_coverage=1 00:12:24.977 --rc genhtml_legend=1 00:12:24.977 --rc geninfo_all_blocks=1 00:12:24.977 --rc geninfo_unexecuted_blocks=1 00:12:24.977 00:12:24.977 ' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.977 --rc genhtml_branch_coverage=1 00:12:24.977 --rc genhtml_function_coverage=1 00:12:24.977 --rc genhtml_legend=1 00:12:24.977 --rc geninfo_all_blocks=1 00:12:24.977 --rc geninfo_unexecuted_blocks=1 00:12:24.977 00:12:24.977 ' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:24.977 --rc genhtml_branch_coverage=1 00:12:24.977 --rc genhtml_function_coverage=1 00:12:24.977 --rc genhtml_legend=1 00:12:24.977 --rc geninfo_all_blocks=1 00:12:24.977 --rc geninfo_unexecuted_blocks=1 00:12:24.977 00:12:24.977 ' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:24.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
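(The "[: : integer expression expected" message above is non-fatal - the run continues - but it comes from nvmf/common.sh line 33 testing a flag numerically while the variable is empty. A two-line illustration, with a hypothetical variable name:)

  flag=""
  [ "$flag" -eq 1 ]       # -> [: : integer expression expected, exit status 2
  [ "${flag:-0}" -eq 1 ]  # defaulting the expansion gives a clean false instead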
00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:24.977 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=643801 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 643801' 00:12:24.978 Process pid: 643801 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 643801 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 643801 ']' 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.978 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:25.237 [2024-12-11 14:49:07.761333] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:12:25.237 [2024-12-11 14:49:07.761427] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.237 [2024-12-11 14:49:07.828478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.237 [2024-12-11 14:49:07.886077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.237 [2024-12-11 14:49:07.886130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.237 [2024-12-11 14:49:07.886158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.237 [2024-12-11 14:49:07.886169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.237 [2024-12-11 14:49:07.886178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.237 [2024-12-11 14:49:07.887617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.237 [2024-12-11 14:49:07.887676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.237 [2024-12-11 14:49:07.887744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.237 [2024-12-11 14:49:07.887747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.237 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.237 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:25.237 14:49:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:26.614 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:26.614 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:26.614 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:26.614 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:26.614 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:26.614 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:26.873 Malloc1 00:12:26.873 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:27.131 14:49:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:27.390 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:27.647 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:27.648 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:27.648 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:28.213 Malloc2 00:12:28.213 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
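(Condensed, the target bring-up and the setup_nvmf_vfio_user loop traced here issue this sequence - a sketch assembled from the trace, with the long nvmf_tgt/rpc.py paths shortened; the second loop iteration for cnode2 completes just below:)

  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # nvmf_vfio_user.sh@54
  nvmfpid=$!                                  # @55 (643801 in this run)
  waitforlisten $nvmfpid                      # @60: block until /var/tmp/spdk.sock answers

  rpc.py nvmf_create_transport -t VFIOUSER    # @64
  for i in 1 2; do                            # @68: NUM_DEVICES=2
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i                      # @69
      rpc.py bdev_malloc_create 64 512 -b Malloc$i                           # @71
      rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i  # @72
      rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i      # @73
      rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0       # @74
  done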
00:12:28.213 14:49:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:28.780 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:28.780 [2024-12-11 14:49:11.530560] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:12:28.780 [2024-12-11 14:49:11.530606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644222 ] 00:12:29.041 [2024-12-11 14:49:11.582746] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:29.041 [2024-12-11 14:49:11.585263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.041 [2024-12-11 14:49:11.585298] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8384f3b000 00:12:29.041 [2024-12-11 14:49:11.586253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.041 [2024-12-11 14:49:11.587253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.041 [2024-12-11 14:49:11.588256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.042 [2024-12-11 14:49:11.589263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.042 [2024-12-11 14:49:11.590268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.042 [2024-12-11 14:49:11.591274] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.042 [2024-12-11 14:49:11.592279] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:29.042 [2024-12-11 14:49:11.593288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.042 [2024-12-11 14:49:11.594293] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.042 [2024-12-11 14:49:11.594313] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8384f30000 00:12:29.042 [2024-12-11 14:49:11.595435] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.042 [2024-12-11 14:49:11.609162] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:29.042 [2024-12-11 14:49:11.609210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:29.042 [2024-12-11 14:49:11.618431] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:29.042 [2024-12-11 14:49:11.618492] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:29.042 [2024-12-11 14:49:11.618621] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:29.042 [2024-12-11 14:49:11.618659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:29.042 [2024-12-11 14:49:11.618671] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:29.042 [2024-12-11 14:49:11.619424] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:29.042 [2024-12-11 14:49:11.619445] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:29.042 [2024-12-11 14:49:11.619458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:29.042 [2024-12-11 14:49:11.620427] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:29.042 [2024-12-11 14:49:11.620447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:29.042 [2024-12-11 14:49:11.620461] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:29.042 [2024-12-11 14:49:11.621434] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:29.042 [2024-12-11 14:49:11.621454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:29.042 [2024-12-11 14:49:11.622442] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:12:29.042 [2024-12-11 14:49:11.622462] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:29.042 [2024-12-11 14:49:11.622471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:29.042 [2024-12-11 14:49:11.622482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:29.042 [2024-12-11 14:49:11.622593] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:29.042 [2024-12-11 14:49:11.622604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:29.042 [2024-12-11 14:49:11.622613] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:29.042 [2024-12-11 14:49:11.623449] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:29.042 [2024-12-11 14:49:11.624454] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:29.042 [2024-12-11 14:49:11.625458] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:29.042 [2024-12-11 14:49:11.626449] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.042 [2024-12-11 14:49:11.626577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:29.042 [2024-12-11 14:49:11.627465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:29.042 [2024-12-11 14:49:11.627483] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:29.042 [2024-12-11 14:49:11.627492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:29.042 [2024-12-11 14:49:11.627516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:29.042 [2024-12-11 14:49:11.627553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:29.042 [2024-12-11 14:49:11.627585] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.042 [2024-12-11 14:49:11.627596] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.042 [2024-12-11 14:49:11.627603] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.042 [2024-12-11 14:49:11.627625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:29.042 [2024-12-11 14:49:11.627703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:29.042 [2024-12-11 14:49:11.627725] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:29.042 [2024-12-11 14:49:11.627734] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:29.042 [2024-12-11 14:49:11.627741] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:29.042 [2024-12-11 14:49:11.627750] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:29.042 [2024-12-11 14:49:11.627758] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:29.042 [2024-12-11 14:49:11.627766] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:29.042 [2024-12-11 14:49:11.627773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:29.042 [2024-12-11 14:49:11.627791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:29.042 [2024-12-11 14:49:11.627811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:29.042 [2024-12-11 14:49:11.627841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:29.042 [2024-12-11 14:49:11.627861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.042 [2024-12-11 14:49:11.627878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.042 [2024-12-11 14:49:11.627890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.042 [2024-12-11 14:49:11.627918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.043 [2024-12-11 14:49:11.627926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.627941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.627955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.627967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.627979] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:29.043 
[2024-12-11 14:49:11.627987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.627998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628138] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:29.043 [2024-12-11 14:49:11.628146] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:29.043 [2024-12-11 14:49:11.628152] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.043 [2024-12-11 14:49:11.628161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628201] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:29.043 [2024-12-11 14:49:11.628217] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628245] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.043 [2024-12-11 14:49:11.628253] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.043 [2024-12-11 14:49:11.628262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.043 [2024-12-11 14:49:11.628272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628363] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.043 [2024-12-11 14:49:11.628370] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.043 [2024-12-11 14:49:11.628376] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.043 [2024-12-11 14:49:11.628385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628427] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628477] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:29.043 [2024-12-11 14:49:11.628484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:29.043 [2024-12-11 14:49:11.628492] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:29.043 [2024-12-11 14:49:11.628522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628632] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628703] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:29.043 [2024-12-11 14:49:11.628713] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:29.043 [2024-12-11 14:49:11.628720] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:29.043 [2024-12-11 14:49:11.628726] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:29.043 [2024-12-11 14:49:11.628732] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:29.043 [2024-12-11 14:49:11.628742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:29.043 [2024-12-11 14:49:11.628754] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:29.043 [2024-12-11 14:49:11.628762] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:29.043 [2024-12-11 14:49:11.628768] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.043 [2024-12-11 14:49:11.628777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628789] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:29.043 [2024-12-11 14:49:11.628797] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.043 [2024-12-11 14:49:11.628803] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.043 [2024-12-11 14:49:11.628812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628840] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:29.043 [2024-12-11 14:49:11.628848] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:29.043 [2024-12-11 14:49:11.628854] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.043 [2024-12-11 14:49:11.628863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:29.043 [2024-12-11 14:49:11.628875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:29.043 [2024-12-11 14:49:11.628943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:29.043 ===================================================== 00:12:29.043 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:29.043 ===================================================== 00:12:29.043 Controller Capabilities/Features 00:12:29.043 ================================ 00:12:29.043 Vendor ID: 4e58 00:12:29.044 Subsystem Vendor ID: 4e58 00:12:29.044 Serial Number: SPDK1 00:12:29.044 Model Number: SPDK bdev Controller 00:12:29.044 Firmware Version: 25.01 00:12:29.044 Recommended Arb Burst: 6 00:12:29.044 IEEE OUI Identifier: 8d 6b 50 00:12:29.044 Multi-path I/O 00:12:29.044 May have multiple subsystem ports: Yes 00:12:29.044 May have multiple controllers: Yes 00:12:29.044 Associated with SR-IOV VF: No 00:12:29.044 Max Data Transfer Size: 131072 00:12:29.044 Max Number of Namespaces: 32 00:12:29.044 Max Number of I/O Queues: 127 00:12:29.044 NVMe Specification Version (VS): 1.3 00:12:29.044 NVMe Specification Version (Identify): 1.3 00:12:29.044 Maximum Queue Entries: 256 00:12:29.044 Contiguous Queues Required: Yes 00:12:29.044 Arbitration Mechanisms Supported 00:12:29.044 Weighted Round Robin: Not Supported 00:12:29.044 Vendor Specific: Not Supported 00:12:29.044 Reset Timeout: 15000 ms 00:12:29.044 Doorbell Stride: 4 bytes 00:12:29.044 NVM Subsystem Reset: Not Supported 00:12:29.044 Command Sets Supported 00:12:29.044 NVM Command Set: Supported 00:12:29.044 Boot Partition: Not Supported 00:12:29.044 Memory Page Size Minimum: 4096 bytes 00:12:29.044 Memory Page Size Maximum: 4096 bytes 00:12:29.044 Persistent Memory Region: Not Supported 00:12:29.044 Optional Asynchronous Events Supported 00:12:29.044 Namespace Attribute Notices: Supported 00:12:29.044 Firmware Activation Notices: Not Supported 00:12:29.044 ANA Change Notices: Not Supported 00:12:29.044 PLE Aggregate Log Change Notices: Not Supported 00:12:29.044 LBA Status Info Alert Notices: Not Supported 00:12:29.044 EGE Aggregate Log Change Notices: Not Supported 00:12:29.044 Normal NVM Subsystem Shutdown event: Not Supported 00:12:29.044 Zone Descriptor Change Notices: Not Supported 00:12:29.044 Discovery Log Change Notices: Not Supported 00:12:29.044 Controller Attributes 00:12:29.044 128-bit Host Identifier: Supported 00:12:29.044 Non-Operational Permissive Mode: Not Supported 00:12:29.044 NVM Sets: Not Supported 00:12:29.044 Read Recovery Levels: Not Supported 00:12:29.044 Endurance Groups: Not Supported 00:12:29.044 Predictable Latency Mode: Not Supported 00:12:29.044 Traffic Based Keep ALive: Not Supported 00:12:29.044 Namespace Granularity: Not Supported 00:12:29.044 SQ Associations: Not Supported 00:12:29.044 UUID List: Not Supported 00:12:29.044 Multi-Domain Subsystem: Not Supported 00:12:29.044 Fixed Capacity Management: Not Supported 00:12:29.044 Variable Capacity Management: Not Supported 00:12:29.044 Delete Endurance Group: Not Supported 00:12:29.044 Delete NVM Set: Not Supported 00:12:29.044 Extended LBA Formats Supported: Not Supported 00:12:29.044 Flexible Data Placement Supported: Not Supported 00:12:29.044 00:12:29.044 Controller Memory Buffer Support 00:12:29.044 ================================ 00:12:29.044 
Supported: No 00:12:29.044 00:12:29.044 Persistent Memory Region Support 00:12:29.044 ================================ 00:12:29.044 Supported: No 00:12:29.044 00:12:29.044 Admin Command Set Attributes 00:12:29.044 ============================ 00:12:29.044 Security Send/Receive: Not Supported 00:12:29.044 Format NVM: Not Supported 00:12:29.044 Firmware Activate/Download: Not Supported 00:12:29.044 Namespace Management: Not Supported 00:12:29.044 Device Self-Test: Not Supported 00:12:29.044 Directives: Not Supported 00:12:29.044 NVMe-MI: Not Supported 00:12:29.044 Virtualization Management: Not Supported 00:12:29.044 Doorbell Buffer Config: Not Supported 00:12:29.044 Get LBA Status Capability: Not Supported 00:12:29.044 Command & Feature Lockdown Capability: Not Supported 00:12:29.044 Abort Command Limit: 4 00:12:29.044 Async Event Request Limit: 4 00:12:29.044 Number of Firmware Slots: N/A 00:12:29.044 Firmware Slot 1 Read-Only: N/A 00:12:29.044 Firmware Activation Without Reset: N/A 00:12:29.044 Multiple Update Detection Support: N/A 00:12:29.044 Firmware Update Granularity: No Information Provided 00:12:29.044 Per-Namespace SMART Log: No 00:12:29.044 Asymmetric Namespace Access Log Page: Not Supported 00:12:29.044 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:29.044 Command Effects Log Page: Supported 00:12:29.044 Get Log Page Extended Data: Supported 00:12:29.044 Telemetry Log Pages: Not Supported 00:12:29.044 Persistent Event Log Pages: Not Supported 00:12:29.044 Supported Log Pages Log Page: May Support 00:12:29.044 Commands Supported & Effects Log Page: Not Supported 00:12:29.044 Feature Identifiers & Effects Log Page:May Support 00:12:29.044 NVMe-MI Commands & Effects Log Page: May Support 00:12:29.044 Data Area 4 for Telemetry Log: Not Supported 00:12:29.044 Error Log Page Entries Supported: 128 00:12:29.044 Keep Alive: Supported 00:12:29.044 Keep Alive Granularity: 10000 ms 00:12:29.044 00:12:29.044 NVM Command Set Attributes 00:12:29.044 ========================== 00:12:29.044 Submission Queue Entry Size 00:12:29.044 Max: 64 00:12:29.044 Min: 64 00:12:29.044 Completion Queue Entry Size 00:12:29.044 Max: 16 00:12:29.044 Min: 16 00:12:29.044 Number of Namespaces: 32 00:12:29.044 Compare Command: Supported 00:12:29.044 Write Uncorrectable Command: Not Supported 00:12:29.044 Dataset Management Command: Supported 00:12:29.044 Write Zeroes Command: Supported 00:12:29.044 Set Features Save Field: Not Supported 00:12:29.045 Reservations: Not Supported 00:12:29.045 Timestamp: Not Supported 00:12:29.045 Copy: Supported 00:12:29.045 Volatile Write Cache: Present 00:12:29.045 Atomic Write Unit (Normal): 1 00:12:29.045 Atomic Write Unit (PFail): 1 00:12:29.045 Atomic Compare & Write Unit: 1 00:12:29.045 Fused Compare & Write: Supported 00:12:29.045 Scatter-Gather List 00:12:29.045 SGL Command Set: Supported (Dword aligned) 00:12:29.045 SGL Keyed: Not Supported 00:12:29.045 SGL Bit Bucket Descriptor: Not Supported 00:12:29.045 SGL Metadata Pointer: Not Supported 00:12:29.045 Oversized SGL: Not Supported 00:12:29.045 SGL Metadata Address: Not Supported 00:12:29.045 SGL Offset: Not Supported 00:12:29.045 Transport SGL Data Block: Not Supported 00:12:29.045 Replay Protected Memory Block: Not Supported 00:12:29.045 00:12:29.045 Firmware Slot Information 00:12:29.045 ========================= 00:12:29.045 Active slot: 1 00:12:29.045 Slot 1 Firmware Revision: 25.01 00:12:29.045 00:12:29.045 00:12:29.045 Commands Supported and Effects 00:12:29.045 ============================== 00:12:29.045 Admin 
Commands 00:12:29.045 -------------- 00:12:29.045 Get Log Page (02h): Supported 00:12:29.045 Identify (06h): Supported 00:12:29.045 Abort (08h): Supported 00:12:29.045 Set Features (09h): Supported 00:12:29.045 Get Features (0Ah): Supported 00:12:29.045 Asynchronous Event Request (0Ch): Supported 00:12:29.045 Keep Alive (18h): Supported 00:12:29.045 I/O Commands 00:12:29.045 ------------ 00:12:29.045 Flush (00h): Supported LBA-Change 00:12:29.045 Write (01h): Supported LBA-Change 00:12:29.045 Read (02h): Supported 00:12:29.045 Compare (05h): Supported 00:12:29.045 Write Zeroes (08h): Supported LBA-Change 00:12:29.045 Dataset Management (09h): Supported LBA-Change 00:12:29.045 Copy (19h): Supported LBA-Change 00:12:29.045 00:12:29.045 Error Log 00:12:29.045 ========= 00:12:29.045 00:12:29.045 Arbitration 00:12:29.045 =========== 00:12:29.045 Arbitration Burst: 1 00:12:29.045 00:12:29.045 Power Management 00:12:29.045 ================ 00:12:29.045 Number of Power States: 1 00:12:29.045 Current Power State: Power State #0 00:12:29.045 Power State #0: 00:12:29.045 Max Power: 0.00 W 00:12:29.045 Non-Operational State: Operational 00:12:29.045 Entry Latency: Not Reported 00:12:29.045 Exit Latency: Not Reported 00:12:29.045 Relative Read Throughput: 0 00:12:29.045 Relative Read Latency: 0 00:12:29.045 Relative Write Throughput: 0 00:12:29.045 Relative Write Latency: 0 00:12:29.045 Idle Power: Not Reported 00:12:29.045 Active Power: Not Reported 00:12:29.045 Non-Operational Permissive Mode: Not Supported 00:12:29.045 00:12:29.045 Health Information 00:12:29.045 ================== 00:12:29.045 Critical Warnings: 00:12:29.045 Available Spare Space: OK 00:12:29.045 Temperature: OK 00:12:29.045 Device Reliability: OK 00:12:29.045 Read Only: No 00:12:29.045 Volatile Memory Backup: OK 00:12:29.045 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:29.045 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:29.045 Available Spare: 0% 00:12:29.045 Available Spare Threshold: 0% 00:12:29.045 [2024-12-11 14:49:11.629066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:29.045 [2024-12-11 14:49:11.629083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:29.045 [2024-12-11 14:49:11.629129] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:29.045 [2024-12-11 14:49:11.629148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.045 [2024-12-11 14:49:11.629163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.045 [2024-12-11 14:49:11.629173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.045 [2024-12-11 14:49:11.629183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.045 [2024-12-11 14:49:11.629474] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:29.045 [2024-12-11 14:49:11.629497] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:29.045 [2024-12-11 14:49:11.630475]
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.045 [2024-12-11 14:49:11.630566] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:29.045 [2024-12-11 14:49:11.630595] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:29.045 [2024-12-11 14:49:11.631489] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:29.045 [2024-12-11 14:49:11.631512] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:29.045 [2024-12-11 14:49:11.631598] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:29.045 [2024-12-11 14:49:11.634556] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.045 are Threshold: 0% 00:12:29.045 Life Percentage Used: 0% 00:12:29.045 Data Units Read: 0 00:12:29.045 Data Units Written: 0 00:12:29.045 Host Read Commands: 0 00:12:29.045 Host Write Commands: 0 00:12:29.045 Controller Busy Time: 0 minutes 00:12:29.045 Power Cycles: 0 00:12:29.045 Power On Hours: 0 hours 00:12:29.045 Unsafe Shutdowns: 0 00:12:29.045 Unrecoverable Media Errors: 0 00:12:29.045 Lifetime Error Log Entries: 0 00:12:29.045 Warning Temperature Time: 0 minutes 00:12:29.045 Critical Temperature Time: 0 minutes 00:12:29.045 00:12:29.045 Number of Queues 00:12:29.045 ================ 00:12:29.045 Number of I/O Submission Queues: 127 00:12:29.045 Number of I/O Completion Queues: 127 00:12:29.045 00:12:29.045 Active Namespaces 00:12:29.045 ================= 00:12:29.045 Namespace ID:1 00:12:29.045 Error Recovery Timeout: Unlimited 00:12:29.045 Command Set Identifier: NVM (00h) 00:12:29.045 Deallocate: Supported 00:12:29.045 Deallocated/Unwritten Error: Not Supported 00:12:29.045 Deallocated Read Value: Unknown 00:12:29.045 Deallocate in Write Zeroes: Not Supported 00:12:29.045 Deallocated Guard Field: 0xFFFF 00:12:29.045 Flush: Supported 00:12:29.045 Reservation: Supported 00:12:29.045 Namespace Sharing Capabilities: Multiple Controllers 00:12:29.045 Size (in LBAs): 131072 (0GiB) 00:12:29.045 Capacity (in LBAs): 131072 (0GiB) 00:12:29.045 Utilization (in LBAs): 131072 (0GiB) 00:12:29.045 NGUID: FCCE59DA9B7246D6B3C68DEA2C9732CF 00:12:29.045 UUID: fcce59da-9b72-46d6-b3c6-8dea2c9732cf 00:12:29.045 Thin Provisioning: Not Supported 00:12:29.045 Per-NS Atomic Units: Yes 00:12:29.045 Atomic Boundary Size (Normal): 0 00:12:29.045 Atomic Boundary Size (PFail): 0 00:12:29.045 Atomic Boundary Offset: 0 00:12:29.045 Maximum Single Source Range Length: 65535 00:12:29.045 Maximum Copy Length: 65535 00:12:29.045 Maximum Source Range Count: 1 00:12:29.045 NGUID/EUI64 Never Reused: No 00:12:29.045 Namespace Write Protected: No 00:12:29.045 Number of LBA Formats: 1 00:12:29.045 Current LBA Format: LBA Format #00 00:12:29.046 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:29.046 00:12:29.046 14:49:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
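Step @84 above benchmarks 4 KiB sequential reads against the first vfio-user controller; its output follows. An annotated restatement of the same invocation, for rerunning it by hand (the flag glosses are a sketch from spdk_nvme_perf's usage text and worth re-checking against the tool's --help):

PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
# -r : transport ID (VFIOUSER transport, socket directory as traddr, target NQN)
# -s : DPDK hugepage memory to reserve, in MB
# -g : single-file hugepage segments (surfaces as --single-file-segments in the
#      DPDK EAL parameters echoed later in this log)
# -q 128 -o 4096 : queue depth 128, 4096-byte IOs
# -w read -t 5   : sequential reads for 5 seconds
# -c 0x2         : pin the IO worker to core 1 (matches "NSID 1 with lcore 1")
$PERF -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2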
00:12:29.305 [2024-12-11 14:49:11.885445] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:34.580 Initializing NVMe Controllers 00:12:34.580 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:34.580 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:34.580 Initialization complete. Launching workers. 00:12:34.580 ======================================================== 00:12:34.580 Latency(us) 00:12:34.580 Device Information : IOPS MiB/s Average min max 00:12:34.580 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 31156.99 121.71 4110.02 1229.14 10316.62 00:12:34.580 ======================================================== 00:12:34.580 Total : 31156.99 121.71 4110.02 1229.14 10316.62 00:12:34.580 00:12:34.580 [2024-12-11 14:49:16.907018] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:34.580 14:49:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:34.580 [2024-12-11 14:49:17.174272] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:39.849 Initializing NVMe Controllers 00:12:39.849 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:39.849 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:39.849 Initialization complete. Launching workers. 
00:12:39.849 ======================================================== 00:12:39.849 Latency(us) 00:12:39.849 Device Information : IOPS MiB/s Average min max 00:12:39.849 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.15 62.70 7985.83 6953.56 11968.85 00:12:39.849 ======================================================== 00:12:39.849 Total : 16050.15 62.70 7985.83 6953.56 11968.85 00:12:39.849 00:12:39.849 [2024-12-11 14:49:22.212317] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:39.849 14:49:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:39.849 [2024-12-11 14:49:22.449505] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:45.119 [2024-12-11 14:49:27.528929] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:45.119 Initializing NVMe Controllers 00:12:45.119 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:45.119 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:45.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:45.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:45.119 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:45.119 Initialization complete. Launching workers. 00:12:45.119 Starting thread on core 2 00:12:45.119 Starting thread on core 3 00:12:45.119 Starting thread on core 1 00:12:45.119 14:49:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:45.119 [2024-12-11 14:49:27.854603] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.408 [2024-12-11 14:49:30.921267] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.408 Initializing NVMe Controllers 00:12:48.408 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.408 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:48.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:48.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:48.408 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:48.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:48.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:48.408 Initialization complete. Launching workers. 
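The arbitration pass just launched runs one worker on each of the four cores in mask 0xf, each reporting an urgent-priority submission queue in the lines below, and stops every worker after 100000 IOs (-n 100000 in the echoed configuration). The two figures per core are that worker's IO rate and its total runtime for the 100000 IOs, and the pairs are consistent: 6239.67 IO/s works out to 100000 / 6239.67 ≈ 16.03 s.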
00:12:48.408 Starting thread on core 1 with urgent priority queue 00:12:48.408 Starting thread on core 2 with urgent priority queue 00:12:48.408 Starting thread on core 3 with urgent priority queue 00:12:48.408 Starting thread on core 0 with urgent priority queue 00:12:48.408 SPDK bdev Controller (SPDK1 ) core 0: 6239.67 IO/s 16.03 secs/100000 ios 00:12:48.408 SPDK bdev Controller (SPDK1 ) core 1: 6640.00 IO/s 15.06 secs/100000 ios 00:12:48.408 SPDK bdev Controller (SPDK1 ) core 2: 6657.33 IO/s 15.02 secs/100000 ios 00:12:48.408 SPDK bdev Controller (SPDK1 ) core 3: 6673.00 IO/s 14.99 secs/100000 ios 00:12:48.408 ======================================================== 00:12:48.408 00:12:48.408 14:49:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:48.666 [2024-12-11 14:49:31.249081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.667 Initializing NVMe Controllers 00:12:48.667 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.667 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:48.667 Namespace ID: 1 size: 0GB 00:12:48.667 Initialization complete. 00:12:48.667 INFO: using host memory buffer for IO 00:12:48.667 Hello world! 00:12:48.667 [2024-12-11 14:49:31.282796] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.667 14:49:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:48.924 [2024-12-11 14:49:31.597745] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:49.860 Initializing NVMe Controllers 00:12:49.860 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.860 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:49.860 Initialization complete. Launching workers. 
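The overhead run that follows measures host-side software overhead per IO: first the submit and complete path times as avg/min/max in nanoseconds, then one cumulative histogram per path with microsecond buckets, each row reading 'low - high: cumulative% ( count in bucket )'. In the submit histogram, for instance, '3.532 - 3.556: 0.5078% ( 55)' means 55 submissions landed in that bucket and about 0.51% of all submissions took at most 3.556 us. The isolated buckets out near 4 ms (submit) and 5 ms (complete) hold a small number of IOs but set the max values; plausibly these are one-off effects around controller bring-up or teardown in the 1-second run (-t 1) rather than steady-state behavior.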
00:12:49.860 submit (in ns) avg, min, max = 7483.1, 3507.8, 4015631.1 00:12:49.860 complete (in ns) avg, min, max = 28461.8, 2063.3, 5000450.0 00:12:49.860 00:12:49.860 Submit histogram 00:12:49.860 ================ 00:12:49.860 Range in us Cumulative Count 00:12:49.860 3.484 - 3.508: 0.0079% ( 1) 00:12:49.860 3.508 - 3.532: 0.0714% ( 8) 00:12:49.860 3.532 - 3.556: 0.5078% ( 55) 00:12:49.860 3.556 - 3.579: 2.0630% ( 196) 00:12:49.860 3.579 - 3.603: 4.7528% ( 339) 00:12:49.860 3.603 - 3.627: 10.7831% ( 760) 00:12:49.860 3.627 - 3.650: 18.6860% ( 996) 00:12:49.860 3.650 - 3.674: 28.4377% ( 1229) 00:12:49.860 3.674 - 3.698: 37.3324% ( 1121) 00:12:49.860 3.698 - 3.721: 45.5447% ( 1035) 00:12:49.860 3.721 - 3.745: 51.0355% ( 692) 00:12:49.860 3.745 - 3.769: 56.0025% ( 626) 00:12:49.860 3.769 - 3.793: 60.2555% ( 536) 00:12:49.860 3.793 - 3.816: 63.3738% ( 393) 00:12:49.860 3.816 - 3.840: 66.6111% ( 408) 00:12:49.860 3.840 - 3.864: 70.6578% ( 510) 00:12:49.860 3.864 - 3.887: 74.5537% ( 491) 00:12:49.860 3.887 - 3.911: 78.7035% ( 523) 00:12:49.860 3.911 - 3.935: 82.5518% ( 485) 00:12:49.860 3.935 - 3.959: 85.2733% ( 343) 00:12:49.860 3.959 - 3.982: 87.3205% ( 258) 00:12:49.860 3.982 - 4.006: 88.9709% ( 208) 00:12:49.860 4.006 - 4.030: 90.2880% ( 166) 00:12:49.860 4.030 - 4.053: 91.5496% ( 159) 00:12:49.860 4.053 - 4.077: 92.4939% ( 119) 00:12:49.860 4.077 - 4.101: 93.2952% ( 101) 00:12:49.860 4.101 - 4.124: 93.9935% ( 88) 00:12:49.860 4.124 - 4.148: 94.6283% ( 80) 00:12:49.860 4.148 - 4.172: 95.0964% ( 59) 00:12:49.860 4.172 - 4.196: 95.4059% ( 39) 00:12:49.860 4.196 - 4.219: 95.6280% ( 28) 00:12:49.860 4.219 - 4.243: 95.7629% ( 17) 00:12:49.860 4.243 - 4.267: 95.9533% ( 24) 00:12:49.860 4.267 - 4.290: 96.1993% ( 31) 00:12:49.860 4.290 - 4.314: 96.3342% ( 17) 00:12:49.860 4.314 - 4.338: 96.4453% ( 14) 00:12:49.860 4.338 - 4.361: 96.5802% ( 17) 00:12:49.860 4.361 - 4.385: 96.6675% ( 11) 00:12:49.860 4.385 - 4.409: 96.7309% ( 8) 00:12:49.860 4.409 - 4.433: 96.7627% ( 4) 00:12:49.860 4.433 - 4.456: 96.8420% ( 10) 00:12:49.860 4.456 - 4.480: 96.8658% ( 3) 00:12:49.860 4.480 - 4.504: 96.8976% ( 4) 00:12:49.860 4.504 - 4.527: 96.9293% ( 4) 00:12:49.860 4.527 - 4.551: 96.9531% ( 3) 00:12:49.860 4.551 - 4.575: 96.9610% ( 1) 00:12:49.860 4.575 - 4.599: 96.9928% ( 4) 00:12:49.860 4.599 - 4.622: 97.0245% ( 4) 00:12:49.860 4.622 - 4.646: 97.0404% ( 2) 00:12:49.860 4.670 - 4.693: 97.0563% ( 2) 00:12:49.860 4.693 - 4.717: 97.0880% ( 4) 00:12:49.860 4.717 - 4.741: 97.0959% ( 1) 00:12:49.860 4.741 - 4.764: 97.1197% ( 3) 00:12:49.860 4.764 - 4.788: 97.1673% ( 6) 00:12:49.860 4.788 - 4.812: 97.1911% ( 3) 00:12:49.860 4.812 - 4.836: 97.2388% ( 6) 00:12:49.860 4.836 - 4.859: 97.2943% ( 7) 00:12:49.860 4.859 - 4.883: 97.3340% ( 5) 00:12:49.860 4.883 - 4.907: 97.3974% ( 8) 00:12:49.860 4.907 - 4.930: 97.4451% ( 6) 00:12:49.860 4.930 - 4.954: 97.5244% ( 10) 00:12:49.860 4.954 - 4.978: 97.6037% ( 10) 00:12:49.860 4.978 - 5.001: 97.6275% ( 3) 00:12:49.860 5.001 - 5.025: 97.6672% ( 5) 00:12:49.860 5.025 - 5.049: 97.7069% ( 5) 00:12:49.860 5.049 - 5.073: 97.7386% ( 4) 00:12:49.860 5.073 - 5.096: 97.7862% ( 6) 00:12:49.860 5.096 - 5.120: 97.8338% ( 6) 00:12:49.860 5.120 - 5.144: 97.8577% ( 3) 00:12:49.860 5.144 - 5.167: 97.8894% ( 4) 00:12:49.860 5.167 - 5.191: 97.9370% ( 6) 00:12:49.860 5.191 - 5.215: 97.9687% ( 4) 00:12:49.860 5.215 - 5.239: 97.9846% ( 2) 00:12:49.860 5.239 - 5.262: 98.0084% ( 3) 00:12:49.860 5.262 - 5.286: 98.0243% ( 2) 00:12:49.860 5.286 - 5.310: 98.0322% ( 1) 00:12:49.860 5.310 - 5.333: 98.0401% ( 1) 
00:12:49.860 5.333 - 5.357: 98.0640% ( 3) 00:12:49.860 5.357 - 5.381: 98.0719% ( 1) 00:12:49.860 5.381 - 5.404: 98.0798% ( 1) 00:12:49.860 5.404 - 5.428: 98.0878% ( 1) 00:12:49.860 5.428 - 5.452: 98.1036% ( 2) 00:12:49.860 5.452 - 5.476: 98.1116% ( 1) 00:12:49.860 5.476 - 5.499: 98.1195% ( 1) 00:12:49.860 5.547 - 5.570: 98.1354% ( 2) 00:12:49.860 5.618 - 5.641: 98.1433% ( 1) 00:12:49.860 5.641 - 5.665: 98.1512% ( 1) 00:12:49.860 5.713 - 5.736: 98.1671% ( 2) 00:12:49.860 5.831 - 5.855: 98.1830% ( 2) 00:12:49.860 5.879 - 5.902: 98.1909% ( 1) 00:12:49.860 5.973 - 5.997: 98.1988% ( 1) 00:12:49.860 6.044 - 6.068: 98.2147% ( 2) 00:12:49.860 6.068 - 6.116: 98.2385% ( 3) 00:12:49.860 6.210 - 6.258: 98.2544% ( 2) 00:12:49.860 6.258 - 6.305: 98.2623% ( 1) 00:12:49.860 6.305 - 6.353: 98.2703% ( 1) 00:12:49.860 6.353 - 6.400: 98.2782% ( 1) 00:12:49.860 6.779 - 6.827: 98.2861% ( 1) 00:12:49.860 6.874 - 6.921: 98.2941% ( 1) 00:12:49.860 7.016 - 7.064: 98.3099% ( 2) 00:12:49.860 7.064 - 7.111: 98.3417% ( 4) 00:12:49.860 7.396 - 7.443: 98.3655% ( 3) 00:12:49.860 7.490 - 7.538: 98.3734% ( 1) 00:12:49.860 7.775 - 7.822: 98.3813% ( 1) 00:12:49.860 8.059 - 8.107: 98.3893% ( 1) 00:12:49.860 8.154 - 8.201: 98.3972% ( 1) 00:12:49.860 8.201 - 8.249: 98.4051% ( 1) 00:12:49.860 8.249 - 8.296: 98.4131% ( 1) 00:12:49.860 8.296 - 8.344: 98.4369% ( 3) 00:12:49.860 8.391 - 8.439: 98.4448% ( 1) 00:12:49.860 8.486 - 8.533: 98.4527% ( 1) 00:12:49.860 8.533 - 8.581: 98.4607% ( 1) 00:12:49.860 8.628 - 8.676: 98.4686% ( 1) 00:12:49.860 8.676 - 8.723: 98.4924% ( 3) 00:12:49.860 8.770 - 8.818: 98.5162% ( 3) 00:12:49.860 8.818 - 8.865: 98.5242% ( 1) 00:12:49.860 8.913 - 8.960: 98.5321% ( 1) 00:12:49.860 9.197 - 9.244: 98.5400% ( 1) 00:12:49.860 9.387 - 9.434: 98.5480% ( 1) 00:12:49.860 9.576 - 9.624: 98.5559% ( 1) 00:12:49.860 9.861 - 9.908: 98.5638% ( 1) 00:12:49.860 10.145 - 10.193: 98.5718% ( 1) 00:12:49.860 10.240 - 10.287: 98.5797% ( 1) 00:12:49.860 10.430 - 10.477: 98.5876% ( 1) 00:12:49.860 10.477 - 10.524: 98.5956% ( 1) 00:12:49.860 10.714 - 10.761: 98.6035% ( 1) 00:12:49.860 10.856 - 10.904: 98.6114% ( 1) 00:12:49.860 10.951 - 10.999: 98.6194% ( 1) 00:12:49.860 11.046 - 11.093: 98.6273% ( 1) 00:12:49.860 11.378 - 11.425: 98.6352% ( 1) 00:12:49.860 11.425 - 11.473: 98.6511% ( 2) 00:12:49.860 11.520 - 11.567: 98.6670% ( 2) 00:12:49.860 11.710 - 11.757: 98.6829% ( 2) 00:12:49.860 12.136 - 12.231: 98.6908% ( 1) 00:12:49.860 12.421 - 12.516: 98.6987% ( 1) 00:12:49.860 12.516 - 12.610: 98.7067% ( 1) 00:12:49.860 12.610 - 12.705: 98.7225% ( 2) 00:12:49.860 12.895 - 12.990: 98.7384% ( 2) 00:12:49.860 13.084 - 13.179: 98.7543% ( 2) 00:12:49.860 13.274 - 13.369: 98.7622% ( 1) 00:12:49.860 13.369 - 13.464: 98.7781% ( 2) 00:12:49.861 13.464 - 13.559: 98.7860% ( 1) 00:12:49.861 13.559 - 13.653: 98.7939% ( 1) 00:12:49.861 13.653 - 13.748: 98.8177% ( 3) 00:12:49.861 13.843 - 13.938: 98.8257% ( 1) 00:12:49.861 14.127 - 14.222: 98.8415% ( 2) 00:12:49.861 14.222 - 14.317: 98.8495% ( 1) 00:12:49.861 14.317 - 14.412: 98.8574% ( 1) 00:12:49.861 14.601 - 14.696: 98.8733% ( 2) 00:12:49.861 14.791 - 14.886: 98.8812% ( 1) 00:12:49.861 17.161 - 17.256: 98.8971% ( 2) 00:12:49.861 17.256 - 17.351: 98.9050% ( 1) 00:12:49.861 17.351 - 17.446: 98.9209% ( 2) 00:12:49.861 17.446 - 17.541: 98.9606% ( 5) 00:12:49.861 17.541 - 17.636: 99.0002% ( 5) 00:12:49.861 17.636 - 17.730: 99.0320% ( 4) 00:12:49.861 17.730 - 17.825: 99.0558% ( 3) 00:12:49.861 17.825 - 17.920: 99.1113% ( 7) 00:12:49.861 17.920 - 18.015: 99.1669% ( 7) 00:12:49.861 18.015 - 18.110: 
99.2065% ( 5) 00:12:49.861 18.110 - 18.204: 99.3256% ( 15) 00:12:49.861 18.204 - 18.299: 99.4049% ( 10) 00:12:49.861 18.299 - 18.394: 99.4366% ( 4) 00:12:49.861 18.394 - 18.489: 99.5239% ( 11) 00:12:49.861 18.489 - 18.584: 99.5477% ( 3) 00:12:49.861 18.584 - 18.679: 99.6033% ( 7) 00:12:49.861 18.679 - 18.773: 99.6747% ( 9) 00:12:49.861 18.773 - 18.868: 99.6985% ( 3) 00:12:49.861 18.868 - 18.963: 99.7064% ( 1) 00:12:49.861 18.963 - 19.058: 99.7144% ( 1) 00:12:49.861 19.058 - 19.153: 99.7302% ( 2) 00:12:49.861 19.153 - 19.247: 99.7461% ( 2) 00:12:49.861 19.342 - 19.437: 99.7699% ( 3) 00:12:49.861 19.437 - 19.532: 99.7778% ( 1) 00:12:49.861 19.532 - 19.627: 99.7858% ( 1) 00:12:49.861 19.627 - 19.721: 99.7937% ( 1) 00:12:49.861 19.721 - 19.816: 99.8096% ( 2) 00:12:49.861 20.006 - 20.101: 99.8175% ( 1) 00:12:49.861 20.101 - 20.196: 99.8254% ( 1) 00:12:49.861 20.196 - 20.290: 99.8334% ( 1) 00:12:49.861 20.575 - 20.670: 99.8413% ( 1) 00:12:49.861 21.523 - 21.618: 99.8572% ( 2) 00:12:49.861 22.471 - 22.566: 99.8651% ( 1) 00:12:49.861 22.850 - 22.945: 99.8730% ( 1) 00:12:49.861 23.230 - 23.324: 99.8810% ( 1) 00:12:49.861 23.419 - 23.514: 99.8889% ( 1) 00:12:49.861 23.988 - 24.083: 99.8968% ( 1) 00:12:49.861 26.359 - 26.548: 99.9048% ( 1) 00:12:49.861 29.013 - 29.203: 99.9127% ( 1) 00:12:49.861 3980.705 - 4004.978: 99.9762% ( 8) 00:12:49.861 4004.978 - 4029.250: 100.0000% ( 3) 00:12:49.861 00:12:49.861 Complete histogram 00:12:49.861 ================== 00:12:49.861 Range in us Cumulative Count 00:12:49.861 2.062 - 2.074: 5.3559% ( 675) 00:12:49.861 2.074 - 2.086: 30.8736% ( 3216) 00:12:49.861 2.086 - 2.098: 36.8008% ( 747) 00:12:49.861 2.098 - 2.110: 46.4175% ( 1212) 00:12:49.861 2.110 - 2.121: 58.9622% ( 1581) 00:12:49.861 2.121 - 2.133: 60.9855% ( 255) 00:12:49.861 2.133 - 2.145: 66.6746% ( 717) 00:12:49.861 2.145 - 2.157: 73.1810% ( 820) 00:12:49.861 2.157 - 2.169: 74.4743% ( 163) 00:12:49.861 2.169 - 2.181: 78.6559% ( 527) 00:12:49.861 2.181 - 2.193: 81.4965% ( 358) 00:12:49.861 2.193 - 2.204: 82.1312% ( 80) 00:12:49.861 2.204 - 2.216: 83.7737% ( 207) 00:12:49.861 2.216 - 2.228: 86.9237% ( 397) 00:12:49.861 2.228 - 2.240: 88.9947% ( 261) 00:12:49.861 2.240 - 2.252: 91.1212% ( 268) 00:12:49.861 2.252 - 2.264: 92.5256% ( 177) 00:12:49.861 2.264 - 2.276: 92.7795% ( 32) 00:12:49.861 2.276 - 2.287: 93.2000% ( 53) 00:12:49.861 2.287 - 2.299: 93.6206% ( 53) 00:12:49.861 2.299 - 2.311: 94.2553% ( 80) 00:12:49.861 2.311 - 2.323: 94.5251% ( 34) 00:12:49.861 2.323 - 2.335: 94.5965% ( 9) 00:12:49.861 2.335 - 2.347: 94.6600% ( 8) 00:12:49.861 2.347 - 2.359: 94.7473% ( 11) 00:12:49.861 2.359 - 2.370: 94.8504% ( 13) 00:12:49.861 2.370 - 2.382: 95.1281% ( 35) 00:12:49.861 2.382 - 2.394: 95.6280% ( 63) 00:12:49.861 2.394 - 2.406: 96.1120% ( 61) 00:12:49.861 2.406 - 2.418: 96.3659% ( 32) 00:12:49.861 2.418 - 2.430: 96.6040% ( 30) 00:12:49.861 2.430 - 2.441: 96.7389% ( 17) 00:12:49.861 2.441 - 2.453: 96.9293% ( 24) 00:12:49.861 2.453 - 2.465: 97.0721% ( 18) 00:12:49.861 2.465 - 2.477: 97.2546% ( 23) 00:12:49.861 2.477 - 2.489: 97.3816% ( 16) 00:12:49.861 2.489 - 2.501: 97.4689% ( 11) 00:12:49.861 2.501 - 2.513: 97.5482% ( 10) 00:12:49.861 2.513 - 2.524: 97.6514% ( 13) 00:12:49.861 2.524 - 2.536: 97.6752% ( 3) 00:12:49.861 2.536 - 2.548: 97.7069% ( 4) 00:12:49.861 2.548 - 2.560: 97.7545% ( 6) 00:12:49.861 2.560 - 2.572: 97.8100% ( 7) 00:12:49.861 2.572 - 2.584: 97.8338% ( 3) 00:12:49.861 2.584 - 2.596: 97.8497% ( 2) 00:12:49.861 2.596 - 2.607: 97.8735% ( 3) 00:12:49.861 2.607 - 2.619: 97.9132% ( 5) 00:12:49.861 
2.619 - 2.631: 97.9291% ( 2) 00:12:49.861 2.631 - 2.643: 97.9529% ( 3) 00:12:49.861 2.643 - 2.655: 97.9687% ( 2) 00:12:49.861 2.655 - 2.667: 97.9767% ( 1) 00:12:49.861 2.667 - 2.679: 98.0005% ( 3) 00:12:49.861 2.679 - 2.690: 98.0163% ( 2) 00:12:49.861 2.690 - 2.702: 98.0322% ( 2) 00:12:49.861 2.714 - 2.726: 98.0401% ( 1) 00:12:49.861 2.726 - 2.738: 98.0560% ( 2) 00:12:49.861 2.738 - 2.750: 98.0719% ( 2) 00:12:49.861 2.750 - 2.761: 98.0878% ( 2) 00:12:49.861 2.761 - 2.773: 98.0957% ( 1) 00:12:49.861 2.773 - 2.785: 98.1354% ( 5) 00:12:49.861 2.785 - 2.797: 98.1592% ( 3) 00:12:49.861 2.797 - 2.809: 98.1671% ( 1) 00:12:49.861 2.809 - 2.821: 98.1750% ( 1) 00:12:49.861 2.821 - 2.833: 98.1909% ( 2) 00:12:49.861 2.833 - 2.844: 98.2068% ( 2) 00:12:49.861 2.844 - 2.856: 98.2147% ( 1) 00:12:49.861 2.856 - 2.868: 98.2226% ( 1) 00:12:49.861 2.892 - 2.904: 98.2385% ( 2) 00:12:49.861 2.904 - 2.916: 98.2464% ( 1) 00:12:49.861 2.916 - 2.927: 98.2544% ( 1) 00:12:49.861 2.927 - 2.939: 98.2623% ( 1) 00:12:49.861 2.939 - 2.951: 98.2782% ( 2) 00:12:49.861 2.951 - 2.963: 98.2861% ( 1) 00:12:49.861 2.963 - 2.975: 98.3020% ( 2) 00:12:49.861 2.975 - 2.987: 98.3179% ( 2) 00:12:49.861 2.999 - 3.010: 98.3258% ( 1) 00:12:49.861 3.010 - 3.022: 98.3337% ( 1) 00:12:49.861 3.022 - 3.034: 98.3417% ( 1) 00:12:49.861 3.034 - 3.058: 98.3655% ( 3) 00:12:49.861 3.058 - 3.081: 98.3734% ( 1) 00:12:49.861 3.081 - 3.105: 98.4051% ( 4) 00:12:49.861 3.105 - 3.129: 98.4131% ( 1) 00:12:49.861 3.129 - 3.153: 98.4210% ( 1) 00:12:49.861 3.176 - 3.200: 98.4369% ( 2) 00:12:49.861 3.224 - 3.247: 98.4448% ( 1) 00:12:49.861 3.247 - 3.271: 98.4527% ( 1) 00:12:49.861 3.271 - 3.295: 98.4686% ( 2) 00:12:49.861 3.295 - 3.319: 98.4766% ( 1) 00:12:49.861 3.342 - 3.366: 98.4845% ( 1) 00:12:49.861 3.390 - 3.413: 98.4924% ( 1) 00:12:49.861 3.437 - 3.461: 98.5004% ( 1) 00:12:49.861 3.461 - 3.484: 98.5083% ( 1) 00:12:49.861 3.508 - 3.532: 98.5162% ( 1) 00:12:49.861 3.556 - 3.579: 98.5242% ( 1) 00:12:49.861 3.579 - 3.603: 98.5321% ( 1) 00:12:49.861 3.603 - 3.627: 98.5480% ( 2) 00:12:49.861 3.674 - 3.698: 98.5559% ( 1) 00:12:49.861 3.698 - 3.721: 98.5638% ( 1) 00:12:49.861 3.793 - 3.816: 98.5718% ( 1) 00:12:49.861 3.816 - 3.840: 98.5797% ( 1) 00:12:49.861 3.959 - 3.982: 98.5876% ( 1) 00:12:49.861 3.982 - 4.006: 98.5956% ( 1) 00:12:49.861 4.101 - 4.124: 98.6035% ( 1) 00:12:49.861 4.480 - 4.504: 98.6114% ( 1) 00:12:49.861 4.622 - 4.646: 98.6194% ( 1) 00:12:49.861 5.641 - 5.665: 98.6273% ( 1) 00:12:49.861 5.736 - 5.760: 98.6352% ( 1) 00:12:49.861 5.831 - 5.855: 98.6432% ( 1) 00:12:49.861 6.163 - 6.210: 98.6590% ( 2) 00:12:49.861 6.495 - 6.542: 98.6670% ( 1) 00:12:49.861 6.590 - 6.637: 98.6749% ( 1) 00:12:49.861 6.827 - 6.874: 98.6829% ( 1) 00:12:49.861 6.874 - 6.921: 98.6908% ( 1) 00:12:49.861 8.154 - 8.201: 98.6987% ( 1) 00:12:49.861 8.676 - 8.723: 98.7067% ( 1) 00:12:49.861 9.102 - 9.150: 98.7146% ( 1) 00:12:49.861 9.719 - 9.766: 98.7225% ( 1) 00:12:49.861 9.861 - 9.908: 98.7305% ( 1) 00:12:49.861 11.330 - 11.378: 98.7384% ( 1) 00:12:49.861 15.739 - 15.834: 98.7463% ( 1) 00:12:49.861 15.834 - 15.929: 98.7622% ( 2) 00:12:49.861 15.929 - 16.024: 98.7939% ( 4) 00:12:49.861 16.024 - 16.119: 98.8257% ( 4) 00:12:49.861 16.213 - 16.308: 98.8415% ( 2) 00:12:49.861 16.308 - 16.403: 98.8812% ( 5) 00:12:49.861 16.403 - 16.498: 98.9130% ( 4) 00:12:49.861 16.498 - 16.593: 98.9923% ( 10) 00:12:49.861 16.593 - 16.687: 99.0558% ( 8) 00:12:49.861 16.687 - 16.782: 99.0955% ( 5) 00:12:49.861 16.782 - 16.877: 99.1113% ( 2) 00:12:49.861 16.877 - 16.972: 99.1431% ( 4) 00:12:49.861 
16.972 - 17.067: 99.1827% ( 5) 00:12:49.861 17.067 - 17.161: 99.1907% ( 1) 00:12:49.861 17.161 - 17.256: 99.1986% ( 1) 00:12:49.861 17.256 - 17.351: 99.2224% ( 3) 00:12:49.861 17.351 - 17.446: 99.2383% ( 2) 00:12:49.861 17.446 - 17.541: 99.2462% ( 1) 00:12:49.861 17.541 - 17.636: 99.2541% ( 1) 00:12:49.861 17.636 - 17.730: 99.2621% ( 1) 00:12:49.862 17.730 - 17.825: 99.2700% ( 1) 00:12:49.862 17.920 - 18.015: 99.2779% ( 1) 00:12:49.862 18.015 - 18.110: 99.2859% ( 1) 00:12:49.862 18.299 - 18.394: 99.2938% ( 1) 00:12:49.862 18.489 - 18.584: 99.3018% ( 1) 00:12:49.862 18.679 - 18.773: 99.3176% ( 2) 00:12:49.862 19.058 - 19.153: 99.3256% ( 1) 00:12:49.862 20.290 - 20.385: 99.3335% ( 1) 00:12:49.862 [2024-12-11 14:49:32.619885] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.120 21.428 - 21.523: 99.3414% ( 1) 00:12:50.120 3021.938 - 3034.074: 99.3573% ( 2) 00:12:50.120 3070.483 - 3082.619: 99.3652% ( 1) 00:12:50.120 3980.705 - 4004.978: 99.7144% ( 44) 00:12:50.120 4004.978 - 4029.250: 99.9921% ( 35) 00:12:50.120 5000.154 - 5024.427: 100.0000% ( 1) 00:12:50.120 00:12:50.120 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:50.120 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:50.120 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:50.120 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:50.120 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.377 [ 00:12:50.377 { 00:12:50.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.377 "subtype": "Discovery", 00:12:50.377 "listen_addresses": [], 00:12:50.377 "allow_any_host": true, 00:12:50.377 "hosts": [] 00:12:50.377 }, 00:12:50.377 { 00:12:50.377 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.377 "subtype": "NVMe", 00:12:50.377 "listen_addresses": [ 00:12:50.377 { 00:12:50.377 "trtype": "VFIOUSER", 00:12:50.377 "adrfam": "IPv4", 00:12:50.377 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.377 "trsvcid": "0" 00:12:50.377 } 00:12:50.377 ], 00:12:50.377 "allow_any_host": true, 00:12:50.377 "hosts": [], 00:12:50.377 "serial_number": "SPDK1", 00:12:50.377 "model_number": "SPDK bdev Controller", 00:12:50.377 "max_namespaces": 32, 00:12:50.377 "min_cntlid": 1, 00:12:50.377 "max_cntlid": 65519, 00:12:50.377 "namespaces": [ 00:12:50.377 { 00:12:50.377 "nsid": 1, 00:12:50.377 "bdev_name": "Malloc1", 00:12:50.377 "name": "Malloc1", 00:12:50.377 "nguid": "FCCE59DA9B7246D6B3C68DEA2C9732CF", 00:12:50.377 "uuid": "fcce59da-9b72-46d6-b3c6-8dea2c9732cf" 00:12:50.377 } 00:12:50.377 ] 00:12:50.377 }, 00:12:50.377 { 00:12:50.377 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.377 "subtype": "NVMe", 00:12:50.377 "listen_addresses": [ 00:12:50.377 { 00:12:50.377 "trtype": "VFIOUSER", 00:12:50.377 "adrfam": "IPv4", 00:12:50.377 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.377 "trsvcid": "0" 00:12:50.377 } 00:12:50.377 ], 00:12:50.377 "allow_any_host": true, 00:12:50.377 "hosts": [], 00:12:50.377 "serial_number": "SPDK2", 00:12:50.377 "model_number": "SPDK bdev Controller",
00:12:50.377 "max_namespaces": 32, 00:12:50.377 "min_cntlid": 1, 00:12:50.377 "max_cntlid": 65519, 00:12:50.377 "namespaces": [ 00:12:50.377 { 00:12:50.377 "nsid": 1, 00:12:50.377 "bdev_name": "Malloc2", 00:12:50.377 "name": "Malloc2", 00:12:50.377 "nguid": "FBE172E647464DC299198F5F92819C61", 00:12:50.377 "uuid": "fbe172e6-4746-4dc2-9919-8f5f92819c61" 00:12:50.377 } 00:12:50.377 ] 00:12:50.377 } 00:12:50.377 ] 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=646745 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:50.377 14:49:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:50.377 [2024-12-11 14:49:33.129113] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:50.635 Malloc3 00:12:50.635 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:50.893 [2024-12-11 14:49:33.530256] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:50.893 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.893 Asynchronous Event Request test 00:12:50.893 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.893 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:50.893 Registering asynchronous event callbacks... 00:12:50.893 Starting namespace attribute notice tests for all controllers... 00:12:50.893 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:50.893 aer_cb - Changed Namespace 00:12:50.893 Cleaning up... 
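The namespace hot-add that fired this notice is only two RPC calls, both traced at @40/@41 above; the subsystem listing printed next shows the result, with Malloc3 attached to cnode1 as nsid 2 alongside Malloc1. A condensed sketch (the commented lines show the usual vfio-user target bring-up these steps assume, with illustrative flags, not an echo of this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Traced above: back a new namespace with a 64 MiB, 512 B-block malloc bdev,
# then attach it to cnode1 as NSID 2; the target raises the Namespace
# Attribute Changed AEN that aer_cb reports.
$RPC bdev_malloc_create 64 512 --name Malloc3
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# Usual prior bring-up (illustrative, not copied from this log):
# $RPC nvmf_create_transport -t VFIOUSER
# $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
# $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
#     -a /var/run/vfio-user/domain/vfio-user1/1 -s 0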
00:12:51.152 [ 00:12:51.152 { 00:12:51.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.152 "subtype": "Discovery", 00:12:51.152 "listen_addresses": [], 00:12:51.152 "allow_any_host": true, 00:12:51.152 "hosts": [] 00:12:51.152 }, 00:12:51.152 { 00:12:51.152 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.152 "subtype": "NVMe", 00:12:51.152 "listen_addresses": [ 00:12:51.152 { 00:12:51.152 "trtype": "VFIOUSER", 00:12:51.152 "adrfam": "IPv4", 00:12:51.152 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.152 "trsvcid": "0" 00:12:51.152 } 00:12:51.152 ], 00:12:51.152 "allow_any_host": true, 00:12:51.152 "hosts": [], 00:12:51.152 "serial_number": "SPDK1", 00:12:51.152 "model_number": "SPDK bdev Controller", 00:12:51.152 "max_namespaces": 32, 00:12:51.152 "min_cntlid": 1, 00:12:51.152 "max_cntlid": 65519, 00:12:51.152 "namespaces": [ 00:12:51.152 { 00:12:51.152 "nsid": 1, 00:12:51.152 "bdev_name": "Malloc1", 00:12:51.152 "name": "Malloc1", 00:12:51.152 "nguid": "FCCE59DA9B7246D6B3C68DEA2C9732CF", 00:12:51.152 "uuid": "fcce59da-9b72-46d6-b3c6-8dea2c9732cf" 00:12:51.152 }, 00:12:51.152 { 00:12:51.152 "nsid": 2, 00:12:51.152 "bdev_name": "Malloc3", 00:12:51.152 "name": "Malloc3", 00:12:51.152 "nguid": "A1CEC322A166466C846767798DBAD457", 00:12:51.152 "uuid": "a1cec322-a166-466c-8467-67798dbad457" 00:12:51.152 } 00:12:51.152 ] 00:12:51.152 }, 00:12:51.152 { 00:12:51.152 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.152 "subtype": "NVMe", 00:12:51.152 "listen_addresses": [ 00:12:51.152 { 00:12:51.152 "trtype": "VFIOUSER", 00:12:51.152 "adrfam": "IPv4", 00:12:51.152 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.152 "trsvcid": "0" 00:12:51.152 } 00:12:51.152 ], 00:12:51.152 "allow_any_host": true, 00:12:51.152 "hosts": [], 00:12:51.152 "serial_number": "SPDK2", 00:12:51.152 "model_number": "SPDK bdev Controller", 00:12:51.152 "max_namespaces": 32, 00:12:51.152 "min_cntlid": 1, 00:12:51.152 "max_cntlid": 65519, 00:12:51.152 "namespaces": [ 00:12:51.152 { 00:12:51.152 "nsid": 1, 00:12:51.152 "bdev_name": "Malloc2", 00:12:51.152 "name": "Malloc2", 00:12:51.152 "nguid": "FBE172E647464DC299198F5F92819C61", 00:12:51.152 "uuid": "fbe172e6-4746-4dc2-9919-8f5f92819c61" 00:12:51.152 } 00:12:51.152 ] 00:12:51.152 } 00:12:51.152 ] 00:12:51.152 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 646745 00:12:51.152 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:51.153 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:51.153 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:51.153 14:49:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:51.153 [2024-12-11 14:49:33.836665] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:12:51.153 [2024-12-11 14:49:33.836710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646875 ] 00:12:51.153 [2024-12-11 14:49:33.888606] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:51.153 [2024-12-11 14:49:33.891009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:51.153 [2024-12-11 14:49:33.891044] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f50381b0000 00:12:51.153 [2024-12-11 14:49:33.892008] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.893016] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.894017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.895019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.896022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.897034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.898039] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.899043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:51.153 [2024-12-11 14:49:33.900054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:51.153 [2024-12-11 14:49:33.900076] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f50381a5000 00:12:51.153 [2024-12-11 14:49:33.901191] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:51.153 [2024-12-11 14:49:33.918761] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:51.153 [2024-12-11 14:49:33.918801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:51.153 [2024-12-11 14:49:33.920938] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:51.153 [2024-12-11 14:49:33.921000] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:51.153 [2024-12-11 14:49:33.921105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:51.153 
[2024-12-11 14:49:33.921133] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:51.153 [2024-12-11 14:49:33.921146] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:51.153 [2024-12-11 14:49:33.921945] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:51.153 [2024-12-11 14:49:33.921971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:51.153 [2024-12-11 14:49:33.921985] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:51.153 [2024-12-11 14:49:33.922949] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:51.153 [2024-12-11 14:49:33.922972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:51.153 [2024-12-11 14:49:33.922987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:51.153 [2024-12-11 14:49:33.923957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:51.153 [2024-12-11 14:49:33.923982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:51.413 [2024-12-11 14:49:33.924944] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:51.413 [2024-12-11 14:49:33.924966] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:51.413 [2024-12-11 14:49:33.924976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:51.413 [2024-12-11 14:49:33.924987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:51.413 [2024-12-11 14:49:33.925097] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:51.413 [2024-12-11 14:49:33.925105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:51.413 [2024-12-11 14:49:33.925118] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:51.413 [2024-12-11 14:49:33.925948] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:51.413 [2024-12-11 14:49:33.926973] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:51.413 [2024-12-11 14:49:33.927971] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: 
*DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:51.413 [2024-12-11 14:49:33.928962] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.413 [2024-12-11 14:49:33.929049] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:51.413 [2024-12-11 14:49:33.929975] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:51.413 [2024-12-11 14:49:33.929997] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:51.413 [2024-12-11 14:49:33.930007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:51.413 [2024-12-11 14:49:33.930032] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:51.413 [2024-12-11 14:49:33.930047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:51.413 [2024-12-11 14:49:33.930069] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:51.413 [2024-12-11 14:49:33.930079] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:51.413 [2024-12-11 14:49:33.930085] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.413 [2024-12-11 14:49:33.930104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:51.413 [2024-12-11 14:49:33.940562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:51.413 [2024-12-11 14:49:33.940586] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:51.413 [2024-12-11 14:49:33.940596] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:51.413 [2024-12-11 14:49:33.940603] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:51.413 [2024-12-11 14:49:33.940611] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:51.413 [2024-12-11 14:49:33.940619] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:51.413 [2024-12-11 14:49:33.940627] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:51.413 [2024-12-11 14:49:33.940635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:51.413 [2024-12-11 14:49:33.940651] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:51.413 [2024-12-11 
14:49:33.940671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:51.413 [2024-12-11 14:49:33.948559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:51.413 [2024-12-11 14:49:33.948594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.413 [2024-12-11 14:49:33.948607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.413 [2024-12-11 14:49:33.948619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.413 [2024-12-11 14:49:33.948631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.413 [2024-12-11 14:49:33.948640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:51.413 [2024-12-11 14:49:33.948657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:51.413 [2024-12-11 14:49:33.948672] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:51.413 [2024-12-11 14:49:33.956560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:51.413 [2024-12-11 14:49:33.956579] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:51.414 [2024-12-11 14:49:33.956588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.956600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.956610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.956624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:33.964573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:33.964650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.964674] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.964689] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:51.414 [2024-12-11 14:49:33.964698] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002f9000 00:12:51.414 [2024-12-11 14:49:33.964703] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.414 [2024-12-11 14:49:33.964713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:33.972573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:33.972597] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:51.414 [2024-12-11 14:49:33.972618] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.972634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.972651] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:51.414 [2024-12-11 14:49:33.972660] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:51.414 [2024-12-11 14:49:33.972666] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.414 [2024-12-11 14:49:33.972676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:33.980573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:33.980604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.980621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.980635] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:51.414 [2024-12-11 14:49:33.980644] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:51.414 [2024-12-11 14:49:33.980650] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.414 [2024-12-11 14:49:33.980659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:33.988577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:33.988598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set 
supported features (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988648] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988664] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:51.414 [2024-12-11 14:49:33.988672] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:51.414 [2024-12-11 14:49:33.988680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:51.414 [2024-12-11 14:49:33.988709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:33.996559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:33.996585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:34.004557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:34.004588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:34.012562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:34.012589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:34.020572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:34.020606] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:51.414 [2024-12-11 14:49:34.020618] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:51.414 [2024-12-11 14:49:34.020624] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:51.414 [2024-12-11 14:49:34.020630] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:51.414 [2024-12-11 14:49:34.020635] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:51.414 [2024-12-11 14:49:34.020645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:51.414 [2024-12-11 14:49:34.020657] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:51.414 
[2024-12-11 14:49:34.020665] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:51.414 [2024-12-11 14:49:34.020671] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.414 [2024-12-11 14:49:34.020680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:34.020691] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:51.414 [2024-12-11 14:49:34.020699] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:51.414 [2024-12-11 14:49:34.020705] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.414 [2024-12-11 14:49:34.020714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:34.020726] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:51.414 [2024-12-11 14:49:34.020734] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:51.414 [2024-12-11 14:49:34.020740] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:51.414 [2024-12-11 14:49:34.020749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:51.414 [2024-12-11 14:49:34.028573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:34.028610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:34.028628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:51.414 [2024-12-11 14:49:34.028641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:51.414 ===================================================== 00:12:51.414 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:51.414 ===================================================== 00:12:51.414 Controller Capabilities/Features 00:12:51.414 ================================ 00:12:51.414 Vendor ID: 4e58 00:12:51.414 Subsystem Vendor ID: 4e58 00:12:51.414 Serial Number: SPDK2 00:12:51.414 Model Number: SPDK bdev Controller 00:12:51.414 Firmware Version: 25.01 00:12:51.414 Recommended Arb Burst: 6 00:12:51.414 IEEE OUI Identifier: 8d 6b 50 00:12:51.414 Multi-path I/O 00:12:51.414 May have multiple subsystem ports: Yes 00:12:51.414 May have multiple controllers: Yes 00:12:51.414 Associated with SR-IOV VF: No 00:12:51.414 Max Data Transfer Size: 131072 00:12:51.414 Max Number of Namespaces: 32 00:12:51.414 Max Number of I/O Queues: 127 00:12:51.414 NVMe Specification Version (VS): 1.3 00:12:51.414 NVMe Specification Version (Identify): 1.3 00:12:51.414 Maximum Queue Entries: 256 00:12:51.414 Contiguous Queues Required: Yes 00:12:51.414 Arbitration Mechanisms Supported 00:12:51.414 Weighted Round Robin: Not Supported 00:12:51.414 Vendor Specific: Not 
Supported 00:12:51.414 Reset Timeout: 15000 ms 00:12:51.414 Doorbell Stride: 4 bytes 00:12:51.414 NVM Subsystem Reset: Not Supported 00:12:51.414 Command Sets Supported 00:12:51.414 NVM Command Set: Supported 00:12:51.414 Boot Partition: Not Supported 00:12:51.414 Memory Page Size Minimum: 4096 bytes 00:12:51.414 Memory Page Size Maximum: 4096 bytes 00:12:51.414 Persistent Memory Region: Not Supported 00:12:51.414 Optional Asynchronous Events Supported 00:12:51.414 Namespace Attribute Notices: Supported 00:12:51.414 Firmware Activation Notices: Not Supported 00:12:51.414 ANA Change Notices: Not Supported 00:12:51.414 PLE Aggregate Log Change Notices: Not Supported 00:12:51.414 LBA Status Info Alert Notices: Not Supported 00:12:51.414 EGE Aggregate Log Change Notices: Not Supported 00:12:51.414 Normal NVM Subsystem Shutdown event: Not Supported 00:12:51.414 Zone Descriptor Change Notices: Not Supported 00:12:51.415 Discovery Log Change Notices: Not Supported 00:12:51.415 Controller Attributes 00:12:51.415 128-bit Host Identifier: Supported 00:12:51.415 Non-Operational Permissive Mode: Not Supported 00:12:51.415 NVM Sets: Not Supported 00:12:51.415 Read Recovery Levels: Not Supported 00:12:51.415 Endurance Groups: Not Supported 00:12:51.415 Predictable Latency Mode: Not Supported 00:12:51.415 Traffic Based Keep ALive: Not Supported 00:12:51.415 Namespace Granularity: Not Supported 00:12:51.415 SQ Associations: Not Supported 00:12:51.415 UUID List: Not Supported 00:12:51.415 Multi-Domain Subsystem: Not Supported 00:12:51.415 Fixed Capacity Management: Not Supported 00:12:51.415 Variable Capacity Management: Not Supported 00:12:51.415 Delete Endurance Group: Not Supported 00:12:51.415 Delete NVM Set: Not Supported 00:12:51.415 Extended LBA Formats Supported: Not Supported 00:12:51.415 Flexible Data Placement Supported: Not Supported 00:12:51.415 00:12:51.415 Controller Memory Buffer Support 00:12:51.415 ================================ 00:12:51.415 Supported: No 00:12:51.415 00:12:51.415 Persistent Memory Region Support 00:12:51.415 ================================ 00:12:51.415 Supported: No 00:12:51.415 00:12:51.415 Admin Command Set Attributes 00:12:51.415 ============================ 00:12:51.415 Security Send/Receive: Not Supported 00:12:51.415 Format NVM: Not Supported 00:12:51.415 Firmware Activate/Download: Not Supported 00:12:51.415 Namespace Management: Not Supported 00:12:51.415 Device Self-Test: Not Supported 00:12:51.415 Directives: Not Supported 00:12:51.415 NVMe-MI: Not Supported 00:12:51.415 Virtualization Management: Not Supported 00:12:51.415 Doorbell Buffer Config: Not Supported 00:12:51.415 Get LBA Status Capability: Not Supported 00:12:51.415 Command & Feature Lockdown Capability: Not Supported 00:12:51.415 Abort Command Limit: 4 00:12:51.415 Async Event Request Limit: 4 00:12:51.415 Number of Firmware Slots: N/A 00:12:51.415 Firmware Slot 1 Read-Only: N/A 00:12:51.415 Firmware Activation Without Reset: N/A 00:12:51.415 Multiple Update Detection Support: N/A 00:12:51.415 Firmware Update Granularity: No Information Provided 00:12:51.415 Per-Namespace SMART Log: No 00:12:51.415 Asymmetric Namespace Access Log Page: Not Supported 00:12:51.415 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:51.415 Command Effects Log Page: Supported 00:12:51.415 Get Log Page Extended Data: Supported 00:12:51.415 Telemetry Log Pages: Not Supported 00:12:51.415 Persistent Event Log Pages: Not Supported 00:12:51.415 Supported Log Pages Log Page: May Support 00:12:51.415 Commands Supported & 
Effects Log Page: Not Supported 00:12:51.415 Feature Identifiers & Effects Log Page:May Support 00:12:51.415 NVMe-MI Commands & Effects Log Page: May Support 00:12:51.415 Data Area 4 for Telemetry Log: Not Supported 00:12:51.415 Error Log Page Entries Supported: 128 00:12:51.415 Keep Alive: Supported 00:12:51.415 Keep Alive Granularity: 10000 ms 00:12:51.415 00:12:51.415 NVM Command Set Attributes 00:12:51.415 ========================== 00:12:51.415 Submission Queue Entry Size 00:12:51.415 Max: 64 00:12:51.415 Min: 64 00:12:51.415 Completion Queue Entry Size 00:12:51.415 Max: 16 00:12:51.415 Min: 16 00:12:51.415 Number of Namespaces: 32 00:12:51.415 Compare Command: Supported 00:12:51.415 Write Uncorrectable Command: Not Supported 00:12:51.415 Dataset Management Command: Supported 00:12:51.415 Write Zeroes Command: Supported 00:12:51.415 Set Features Save Field: Not Supported 00:12:51.415 Reservations: Not Supported 00:12:51.415 Timestamp: Not Supported 00:12:51.415 Copy: Supported 00:12:51.415 Volatile Write Cache: Present 00:12:51.415 Atomic Write Unit (Normal): 1 00:12:51.415 Atomic Write Unit (PFail): 1 00:12:51.415 Atomic Compare & Write Unit: 1 00:12:51.415 Fused Compare & Write: Supported 00:12:51.415 Scatter-Gather List 00:12:51.415 SGL Command Set: Supported (Dword aligned) 00:12:51.415 SGL Keyed: Not Supported 00:12:51.415 SGL Bit Bucket Descriptor: Not Supported 00:12:51.415 SGL Metadata Pointer: Not Supported 00:12:51.415 Oversized SGL: Not Supported 00:12:51.415 SGL Metadata Address: Not Supported 00:12:51.415 SGL Offset: Not Supported 00:12:51.415 Transport SGL Data Block: Not Supported 00:12:51.415 Replay Protected Memory Block: Not Supported 00:12:51.415 00:12:51.415 Firmware Slot Information 00:12:51.415 ========================= 00:12:51.415 Active slot: 1 00:12:51.415 Slot 1 Firmware Revision: 25.01 00:12:51.415 00:12:51.415 00:12:51.415 Commands Supported and Effects 00:12:51.415 ============================== 00:12:51.415 Admin Commands 00:12:51.415 -------------- 00:12:51.415 Get Log Page (02h): Supported 00:12:51.415 Identify (06h): Supported 00:12:51.415 Abort (08h): Supported 00:12:51.415 Set Features (09h): Supported 00:12:51.415 Get Features (0Ah): Supported 00:12:51.415 Asynchronous Event Request (0Ch): Supported 00:12:51.415 Keep Alive (18h): Supported 00:12:51.415 I/O Commands 00:12:51.415 ------------ 00:12:51.415 Flush (00h): Supported LBA-Change 00:12:51.415 Write (01h): Supported LBA-Change 00:12:51.415 Read (02h): Supported 00:12:51.415 Compare (05h): Supported 00:12:51.415 Write Zeroes (08h): Supported LBA-Change 00:12:51.415 Dataset Management (09h): Supported LBA-Change 00:12:51.415 Copy (19h): Supported LBA-Change 00:12:51.415 00:12:51.415 Error Log 00:12:51.415 ========= 00:12:51.415 00:12:51.415 Arbitration 00:12:51.415 =========== 00:12:51.415 Arbitration Burst: 1 00:12:51.415 00:12:51.415 Power Management 00:12:51.415 ================ 00:12:51.415 Number of Power States: 1 00:12:51.415 Current Power State: Power State #0 00:12:51.415 Power State #0: 00:12:51.415 Max Power: 0.00 W 00:12:51.415 Non-Operational State: Operational 00:12:51.415 Entry Latency: Not Reported 00:12:51.415 Exit Latency: Not Reported 00:12:51.415 Relative Read Throughput: 0 00:12:51.415 Relative Read Latency: 0 00:12:51.415 Relative Write Throughput: 0 00:12:51.415 Relative Write Latency: 0 00:12:51.415 Idle Power: Not Reported 00:12:51.415 Active Power: Not Reported 00:12:51.415 Non-Operational Permissive Mode: Not Supported 00:12:51.415 00:12:51.415 Health Information 
00:12:51.415 ================== 00:12:51.415 Critical Warnings: 00:12:51.415 Available Spare Space: OK 00:12:51.415 Temperature: OK 00:12:51.415 Device Reliability: OK 00:12:51.415 Read Only: No 00:12:51.415 Volatile Memory Backup: OK 00:12:51.415 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:51.415 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:51.415 Available Spare: 0% 00:12:51.415 Available Spare Threshold: 0% [2024-12-11 14:49:34.028764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:51.415 [2024-12-11 14:49:34.036573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:51.415 [2024-12-11 14:49:34.036633] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:51.415 [2024-12-11 14:49:34.036654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.415 [2024-12-11 14:49:34.036666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.415 [2024-12-11 14:49:34.036676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.415 [2024-12-11 14:49:34.036686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.415 [2024-12-11 14:49:34.036752] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:51.415 [2024-12-11 14:49:34.036774] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:51.415 [2024-12-11 14:49:34.037759] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.415 [2024-12-11 14:49:34.037834] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:51.415 [2024-12-11 14:49:34.037849] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:51.415 [2024-12-11 14:49:34.038763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:51.415 [2024-12-11 14:49:34.038788] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:51.415 [2024-12-11 14:49:34.038846] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:51.415 [2024-12-11 14:49:34.040039] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:51.415 Life Percentage Used: 0% 00:12:51.415 Data Units Read: 0 00:12:51.415 Data Units Written: 0 00:12:51.415 Host Read Commands: 0 00:12:51.415 Host Write Commands: 0 00:12:51.415 Controller Busy Time: 0 minutes 00:12:51.415 Power Cycles: 0 00:12:51.415 Power On Hours: 0 hours 00:12:51.416 Unsafe Shutdowns: 0 00:12:51.416 Unrecoverable Media Errors: 0 00:12:51.416 Lifetime Error Log Entries: 0 00:12:51.416 Warning Temperature
Time: 0 minutes 00:12:51.416 Critical Temperature Time: 0 minutes 00:12:51.416 00:12:51.416 Number of Queues 00:12:51.416 ================ 00:12:51.416 Number of I/O Submission Queues: 127 00:12:51.416 Number of I/O Completion Queues: 127 00:12:51.416 00:12:51.416 Active Namespaces 00:12:51.416 ================= 00:12:51.416 Namespace ID:1 00:12:51.416 Error Recovery Timeout: Unlimited 00:12:51.416 Command Set Identifier: NVM (00h) 00:12:51.416 Deallocate: Supported 00:12:51.416 Deallocated/Unwritten Error: Not Supported 00:12:51.416 Deallocated Read Value: Unknown 00:12:51.416 Deallocate in Write Zeroes: Not Supported 00:12:51.416 Deallocated Guard Field: 0xFFFF 00:12:51.416 Flush: Supported 00:12:51.416 Reservation: Supported 00:12:51.416 Namespace Sharing Capabilities: Multiple Controllers 00:12:51.416 Size (in LBAs): 131072 (0GiB) 00:12:51.416 Capacity (in LBAs): 131072 (0GiB) 00:12:51.416 Utilization (in LBAs): 131072 (0GiB) 00:12:51.416 NGUID: FBE172E647464DC299198F5F92819C61 00:12:51.416 UUID: fbe172e6-4746-4dc2-9919-8f5f92819c61 00:12:51.416 Thin Provisioning: Not Supported 00:12:51.416 Per-NS Atomic Units: Yes 00:12:51.416 Atomic Boundary Size (Normal): 0 00:12:51.416 Atomic Boundary Size (PFail): 0 00:12:51.416 Atomic Boundary Offset: 0 00:12:51.416 Maximum Single Source Range Length: 65535 00:12:51.416 Maximum Copy Length: 65535 00:12:51.416 Maximum Source Range Count: 1 00:12:51.416 NGUID/EUI64 Never Reused: No 00:12:51.416 Namespace Write Protected: No 00:12:51.416 Number of LBA Formats: 1 00:12:51.416 Current LBA Format: LBA Format #00 00:12:51.416 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:51.416 00:12:51.416 14:49:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:51.674 [2024-12-11 14:49:34.287455] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:56.939 Initializing NVMe Controllers 00:12:56.939 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:56.939 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:56.939 Initialization complete. Launching workers. 
00:12:56.939 ======================================================== 00:12:56.939 Latency(us) 00:12:56.939 Device Information : IOPS MiB/s Average min max 00:12:56.939 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30977.20 121.00 4131.56 1228.87 10324.47 00:12:56.939 ======================================================== 00:12:56.939 Total : 30977.20 121.00 4131.56 1228.87 10324.47 00:12:56.939 00:12:56.939 [2024-12-11 14:49:39.391971] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:56.939 14:49:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:56.939 [2024-12-11 14:49:39.654702] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:02.265 Initializing NVMe Controllers 00:13:02.265 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:02.265 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:02.265 Initialization complete. Launching workers. 00:13:02.265 ======================================================== 00:13:02.265 Latency(us) 00:13:02.265 Device Information : IOPS MiB/s Average min max 00:13:02.265 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 29722.19 116.10 4305.84 1187.14 10447.09 00:13:02.265 ======================================================== 00:13:02.265 Total : 29722.19 116.10 4305.84 1187.14 10447.09 00:13:02.265 00:13:02.265 [2024-12-11 14:49:44.677080] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:02.265 14:49:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:02.265 [2024-12-11 14:49:44.912003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:07.532 [2024-12-11 14:49:50.035717] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:07.532 Initializing NVMe Controllers 00:13:07.532 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:07.532 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:07.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:07.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:07.532 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:07.532 Initialization complete. Launching workers. 
00:13:07.532 Starting thread on core 2 00:13:07.532 Starting thread on core 3 00:13:07.532 Starting thread on core 1 00:13:07.532 14:49:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:07.790 [2024-12-11 14:49:50.352082] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.073 [2024-12-11 14:49:53.416822] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.073 Initializing NVMe Controllers 00:13:11.073 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.073 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.073 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:11.073 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:11.073 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:11.073 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:11.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:11.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:11.073 Initialization complete. Launching workers. 00:13:11.073 Starting thread on core 1 with urgent priority queue 00:13:11.073 Starting thread on core 2 with urgent priority queue 00:13:11.073 Starting thread on core 3 with urgent priority queue 00:13:11.073 Starting thread on core 0 with urgent priority queue 00:13:11.073 SPDK bdev Controller (SPDK2 ) core 0: 2021.67 IO/s 49.46 secs/100000 ios 00:13:11.073 SPDK bdev Controller (SPDK2 ) core 1: 1822.33 IO/s 54.87 secs/100000 ios 00:13:11.073 SPDK bdev Controller (SPDK2 ) core 2: 2093.67 IO/s 47.76 secs/100000 ios 00:13:11.073 SPDK bdev Controller (SPDK2 ) core 3: 1900.00 IO/s 52.63 secs/100000 ios 00:13:11.073 ======================================================== 00:13:11.073 00:13:11.073 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:11.073 [2024-12-11 14:49:53.737067] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.073 Initializing NVMe Controllers 00:13:11.073 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.073 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:11.073 Namespace ID: 1 size: 0GB 00:13:11.073 Initialization complete. 00:13:11.073 INFO: using host memory buffer for IO 00:13:11.073 Hello world! 
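[Editor's note] For readability, the five tool invocations captured in the runs above reduce to the sketch below. The binary paths, transport ID string, and every flag are copied verbatim from the trace (nvmf_vfio_user.sh lines 84-88); the wrapper variables and comments are editorial shorthand, not part of the recorded run.

```bash
#!/usr/bin/env bash
# Condensed sketch of the example invocations traced above; SPDK and TRID
# are editorial shorthand, all flags are verbatim from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

# 5 s read and write passes: 4 KiB I/O, queue depth 128, pinned to core 1 (-c 0x2)
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
$SPDK/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

# Reconnect stress on cores 1-3 (-c 0xE), then arbitration and hello_world sanity runs
$SPDK/build/examples/reconnect -r "$TRID" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
$SPDK/build/examples/arbitration -t 3 -r "$TRID" -d 256 -g
$SPDK/build/examples/hello_world -d 256 -g -r "$TRID"
```

All five tools point at the same vfio-user controller purely through the -r transport ID string; nothing else in the command line distinguishes a vfio-user target from a PCIe or TCP one.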
00:13:11.073 [2024-12-11 14:49:53.746263] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.073 14:49:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:11.330 [2024-12-11 14:49:54.062797] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:12.701 Initializing NVMe Controllers 00:13:12.701 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:12.701 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:12.701 Initialization complete. Launching workers. 00:13:12.701 submit (in ns) avg, min, max = 6752.4, 3505.6, 4016404.4 00:13:12.701 complete (in ns) avg, min, max = 29397.9, 2050.0, 5000765.6 00:13:12.701 00:13:12.701 Submit histogram 00:13:12.701 ================ 00:13:12.701 Range in us Cumulative Count 00:13:12.701 3.484 - 3.508: 0.0078% ( 1) 00:13:12.701 3.508 - 3.532: 0.3283% ( 41) 00:13:12.701 3.532 - 3.556: 1.3755% ( 134) 00:13:12.701 3.556 - 3.579: 4.0094% ( 337) 00:13:12.701 3.579 - 3.603: 8.7065% ( 601) 00:13:12.701 3.603 - 3.627: 17.9758% ( 1186) 00:13:12.701 3.627 - 3.650: 27.4404% ( 1211) 00:13:12.701 3.650 - 3.674: 38.2102% ( 1378) 00:13:12.701 3.674 - 3.698: 45.4865% ( 931) 00:13:12.701 3.698 - 3.721: 52.8722% ( 945) 00:13:12.701 3.721 - 3.745: 57.9992% ( 656) 00:13:12.701 3.745 - 3.769: 62.8292% ( 618) 00:13:12.701 3.769 - 3.793: 66.9402% ( 526) 00:13:12.701 3.793 - 3.816: 70.3947% ( 442) 00:13:12.701 3.816 - 3.840: 73.3802% ( 382) 00:13:12.701 3.840 - 3.864: 76.7097% ( 426) 00:13:12.701 3.864 - 3.887: 80.4064% ( 473) 00:13:12.701 3.887 - 3.911: 83.6030% ( 409) 00:13:12.701 3.911 - 3.935: 86.3775% ( 355) 00:13:12.701 3.935 - 3.959: 88.1751% ( 230) 00:13:12.701 3.959 - 3.982: 90.1602% ( 254) 00:13:12.701 3.982 - 4.006: 92.0125% ( 237) 00:13:12.701 4.006 - 4.030: 93.2943% ( 164) 00:13:12.701 4.030 - 4.053: 94.2712% ( 125) 00:13:12.701 4.053 - 4.077: 95.0449% ( 99) 00:13:12.701 4.077 - 4.101: 95.4435% ( 51) 00:13:12.701 4.101 - 4.124: 95.7952% ( 45) 00:13:12.701 4.124 - 4.148: 95.9515% ( 20) 00:13:12.701 4.148 - 4.172: 96.1313% ( 23) 00:13:12.701 4.172 - 4.196: 96.2564% ( 16) 00:13:12.701 4.196 - 4.219: 96.3970% ( 18) 00:13:12.701 4.219 - 4.243: 96.4908% ( 12) 00:13:12.701 4.243 - 4.267: 96.6159% ( 16) 00:13:12.701 4.267 - 4.290: 96.7018% ( 11) 00:13:12.701 4.290 - 4.314: 96.8191% ( 15) 00:13:12.701 4.314 - 4.338: 96.8894% ( 9) 00:13:12.701 4.338 - 4.361: 96.9519% ( 8) 00:13:12.701 4.361 - 4.385: 96.9754% ( 3) 00:13:12.702 4.385 - 4.409: 96.9988% ( 3) 00:13:12.702 4.409 - 4.433: 97.0223% ( 3) 00:13:12.702 4.433 - 4.456: 97.0457% ( 3) 00:13:12.702 4.456 - 4.480: 97.0614% ( 2) 00:13:12.702 4.504 - 4.527: 97.0692% ( 1) 00:13:12.702 4.551 - 4.575: 97.0848% ( 2) 00:13:12.702 4.575 - 4.599: 97.1082% ( 3) 00:13:12.702 4.599 - 4.622: 97.1161% ( 1) 00:13:12.702 4.622 - 4.646: 97.1239% ( 1) 00:13:12.702 4.646 - 4.670: 97.1708% ( 6) 00:13:12.702 4.670 - 4.693: 97.2020% ( 4) 00:13:12.702 4.693 - 4.717: 97.2333% ( 4) 00:13:12.702 4.717 - 4.741: 97.2958% ( 8) 00:13:12.702 4.741 - 4.764: 97.3583% ( 8) 00:13:12.702 4.764 - 4.788: 97.4443% ( 11) 00:13:12.702 4.788 - 4.812: 97.4990% ( 7) 00:13:12.702 4.812 - 4.836: 97.5537% ( 7) 00:13:12.702 4.836 - 4.859: 97.6475% ( 12) 00:13:12.702 4.859 - 4.883: 97.7100% ( 8) 00:13:12.702 4.883 - 
4.907: 97.7413% ( 4) 00:13:12.702 4.907 - 4.930: 97.8038% ( 8) 00:13:12.702 4.930 - 4.954: 97.8429% ( 5) 00:13:12.702 4.954 - 4.978: 97.8742% ( 4) 00:13:12.702 4.978 - 5.001: 97.9054% ( 4) 00:13:12.702 5.001 - 5.025: 97.9445% ( 5) 00:13:12.702 5.025 - 5.049: 97.9601% ( 2) 00:13:12.702 5.049 - 5.073: 97.9914% ( 4) 00:13:12.702 5.073 - 5.096: 98.0383% ( 6) 00:13:12.702 5.096 - 5.120: 98.0930% ( 7) 00:13:12.702 5.120 - 5.144: 98.1086% ( 2) 00:13:12.702 5.144 - 5.167: 98.1243% ( 2) 00:13:12.702 5.191 - 5.215: 98.1399% ( 2) 00:13:12.702 5.215 - 5.239: 98.1633% ( 3) 00:13:12.702 5.239 - 5.262: 98.1868% ( 3) 00:13:12.702 5.262 - 5.286: 98.1946% ( 1) 00:13:12.702 5.310 - 5.333: 98.2181% ( 3) 00:13:12.702 5.404 - 5.428: 98.2259% ( 1) 00:13:12.702 5.428 - 5.452: 98.2337% ( 1) 00:13:12.702 5.452 - 5.476: 98.2493% ( 2) 00:13:12.702 5.523 - 5.547: 98.2806% ( 4) 00:13:12.702 5.570 - 5.594: 98.2884% ( 1) 00:13:12.702 5.713 - 5.736: 98.2962% ( 1) 00:13:12.702 5.807 - 5.831: 98.3040% ( 1) 00:13:12.702 5.879 - 5.902: 98.3118% ( 1) 00:13:12.702 6.258 - 6.305: 98.3197% ( 1) 00:13:12.702 6.305 - 6.353: 98.3275% ( 1) 00:13:12.702 6.447 - 6.495: 98.3431% ( 2) 00:13:12.702 6.590 - 6.637: 98.3509% ( 1) 00:13:12.702 6.684 - 6.732: 98.3587% ( 1) 00:13:12.702 6.921 - 6.969: 98.3665% ( 1) 00:13:12.702 6.969 - 7.016: 98.3744% ( 1) 00:13:12.702 7.206 - 7.253: 98.3822% ( 1) 00:13:12.702 7.253 - 7.301: 98.3900% ( 1) 00:13:12.702 7.301 - 7.348: 98.3978% ( 1) 00:13:12.702 7.443 - 7.490: 98.4056% ( 1) 00:13:12.702 7.490 - 7.538: 98.4134% ( 1) 00:13:12.702 7.633 - 7.680: 98.4291% ( 2) 00:13:12.702 7.822 - 7.870: 98.4369% ( 1) 00:13:12.702 7.917 - 7.964: 98.4447% ( 1) 00:13:12.702 8.012 - 8.059: 98.4525% ( 1) 00:13:12.702 8.059 - 8.107: 98.4603% ( 1) 00:13:12.702 8.154 - 8.201: 98.4682% ( 1) 00:13:12.702 8.296 - 8.344: 98.4838% ( 2) 00:13:12.702 8.391 - 8.439: 98.4916% ( 1) 00:13:12.702 8.486 - 8.533: 98.4994% ( 1) 00:13:12.702 8.533 - 8.581: 98.5072% ( 1) 00:13:12.702 8.676 - 8.723: 98.5150% ( 1) 00:13:12.702 8.723 - 8.770: 98.5229% ( 1) 00:13:12.702 8.770 - 8.818: 98.5307% ( 1) 00:13:12.702 8.960 - 9.007: 98.5385% ( 1) 00:13:12.702 9.007 - 9.055: 98.5463% ( 1) 00:13:12.702 9.055 - 9.102: 98.5541% ( 1) 00:13:12.702 9.102 - 9.150: 98.5776% ( 3) 00:13:12.702 9.197 - 9.244: 98.5932% ( 2) 00:13:12.702 9.292 - 9.339: 98.6166% ( 3) 00:13:12.702 9.339 - 9.387: 98.6323% ( 2) 00:13:12.702 9.434 - 9.481: 98.6401% ( 1) 00:13:12.702 9.481 - 9.529: 98.6479% ( 1) 00:13:12.702 9.624 - 9.671: 98.6557% ( 1) 00:13:12.702 9.719 - 9.766: 98.6714% ( 2) 00:13:12.702 9.813 - 9.861: 98.6792% ( 1) 00:13:12.702 9.861 - 9.908: 98.6948% ( 2) 00:13:12.702 9.908 - 9.956: 98.7026% ( 1) 00:13:12.702 10.050 - 10.098: 98.7104% ( 1) 00:13:12.702 10.098 - 10.145: 98.7182% ( 1) 00:13:12.702 10.193 - 10.240: 98.7261% ( 1) 00:13:12.702 10.335 - 10.382: 98.7339% ( 1) 00:13:12.702 10.477 - 10.524: 98.7573% ( 3) 00:13:12.702 10.524 - 10.572: 98.7651% ( 1) 00:13:12.702 10.619 - 10.667: 98.7730% ( 1) 00:13:12.702 10.714 - 10.761: 98.7808% ( 1) 00:13:12.702 10.856 - 10.904: 98.7886% ( 1) 00:13:12.702 10.904 - 10.951: 98.7964% ( 1) 00:13:12.702 10.999 - 11.046: 98.8120% ( 2) 00:13:12.702 11.046 - 11.093: 98.8199% ( 1) 00:13:12.702 11.093 - 11.141: 98.8277% ( 1) 00:13:12.702 11.188 - 11.236: 98.8355% ( 1) 00:13:12.702 11.283 - 11.330: 98.8433% ( 1) 00:13:12.702 11.520 - 11.567: 98.8511% ( 1) 00:13:12.702 11.662 - 11.710: 98.8589% ( 1) 00:13:12.702 11.757 - 11.804: 98.8667% ( 1) 00:13:12.702 11.947 - 11.994: 98.8746% ( 1) 00:13:12.702 12.136 - 12.231: 98.8902% ( 2) 
00:13:12.702 12.326 - 12.421: 98.9058% ( 2) 00:13:12.702 12.516 - 12.610: 98.9136% ( 1) 00:13:12.702 12.610 - 12.705: 98.9215% ( 1) 00:13:12.702 13.084 - 13.179: 98.9293% ( 1) 00:13:12.702 13.274 - 13.369: 98.9371% ( 1) 00:13:12.702 13.464 - 13.559: 98.9449% ( 1) 00:13:12.702 13.559 - 13.653: 98.9527% ( 1) 00:13:12.702 13.653 - 13.748: 98.9683% ( 2) 00:13:12.702 13.748 - 13.843: 98.9918% ( 3) 00:13:12.702 13.843 - 13.938: 98.9996% ( 1) 00:13:12.702 14.033 - 14.127: 99.0074% ( 1) 00:13:12.702 14.317 - 14.412: 99.0152% ( 1) 00:13:12.702 14.412 - 14.507: 99.0231% ( 1) 00:13:12.702 14.601 - 14.696: 99.0309% ( 1) 00:13:12.702 14.696 - 14.791: 99.0387% ( 1) 00:13:12.702 15.644 - 15.739: 99.0465% ( 1) 00:13:12.702 17.067 - 17.161: 99.0543% ( 1) 00:13:12.702 17.161 - 17.256: 99.0778% ( 3) 00:13:12.702 17.256 - 17.351: 99.0856% ( 1) 00:13:12.702 17.351 - 17.446: 99.1325% ( 6) 00:13:12.702 17.446 - 17.541: 99.1637% ( 4) 00:13:12.702 17.541 - 17.636: 99.1950% ( 4) 00:13:12.702 17.636 - 17.730: 99.2419% ( 6) 00:13:12.702 17.730 - 17.825: 99.2810% ( 5) 00:13:12.702 17.825 - 17.920: 99.3200% ( 5) 00:13:12.702 17.920 - 18.015: 99.3826% ( 8) 00:13:12.702 18.015 - 18.110: 99.4373% ( 7) 00:13:12.702 18.110 - 18.204: 99.5311% ( 12) 00:13:12.702 18.204 - 18.299: 99.6014% ( 9) 00:13:12.702 18.299 - 18.394: 99.6561% ( 7) 00:13:12.702 18.394 - 18.489: 99.7265% ( 9) 00:13:12.702 18.489 - 18.584: 99.7499% ( 3) 00:13:12.702 18.584 - 18.679: 99.7968% ( 6) 00:13:12.702 18.679 - 18.773: 99.8359% ( 5) 00:13:12.702 18.773 - 18.868: 99.8593% ( 3) 00:13:12.702 18.868 - 18.963: 99.8750% ( 2) 00:13:12.702 18.963 - 19.058: 99.8828% ( 1) 00:13:12.702 19.058 - 19.153: 99.8906% ( 1) 00:13:12.702 19.627 - 19.721: 99.8984% ( 1) 00:13:12.702 21.239 - 21.333: 99.9062% ( 1) 00:13:12.702 22.187 - 22.281: 99.9140% ( 1) 00:13:12.702 24.083 - 24.178: 99.9218% ( 1) 00:13:12.702 27.686 - 27.876: 99.9297% ( 1) 00:13:12.702 3980.705 - 4004.978: 99.9844% ( 7) 00:13:12.702 4004.978 - 4029.250: 100.0000% ( 2) 00:13:12.702 00:13:12.702 Complete histogram 00:13:12.702 ================== 00:13:12.702 Range in us Cumulative Count 00:13:12.702 2.039 - 2.050: 0.0078% ( 1) 00:13:12.702 2.050 - 2.062: 7.4091% ( 947) 00:13:12.702 2.062 - 2.074: 30.3322% ( 2933) 00:13:12.702 2.074 - 2.086: 34.4666% ( 529) 00:13:12.702 2.086 - 2.098: 45.9555% ( 1470) 00:13:12.702 2.098 - 2.110: 59.2732% ( 1704) 00:13:12.702 2.110 - 2.121: 62.4306% ( 404) 00:13:12.702 2.121 - 2.133: 70.5510% ( 1039) 00:13:12.702 2.133 - 2.145: 76.6002% ( 774) 00:13:12.702 2.145 - 2.157: 78.1868% ( 203) 00:13:12.702 2.157 - 2.169: 84.2829% ( 780) 00:13:12.702 2.169 - 2.181: 88.0188% ( 478) 00:13:12.702 2.181 - 2.193: 88.9644% ( 121) 00:13:12.702 2.193 - 2.204: 90.2931% ( 170) 00:13:12.702 2.204 - 2.216: 91.9187% ( 208) 00:13:12.702 2.216 - 2.228: 93.4428% ( 195) 00:13:12.702 2.228 - 2.240: 94.0367% ( 76) 00:13:12.702 2.240 - 2.252: 94.4510% ( 53) 00:13:12.702 2.252 - 2.264: 94.6151% ( 21) 00:13:12.702 2.264 - 2.276: 94.7714% ( 20) 00:13:12.702 2.276 - 2.287: 95.1309% ( 46) 00:13:12.702 2.287 - 2.299: 95.3888% ( 33) 00:13:12.702 2.299 - 2.311: 95.4357% ( 6) 00:13:12.702 2.311 - 2.323: 95.4513% ( 2) 00:13:12.702 2.323 - 2.335: 95.4904% ( 5) 00:13:12.702 2.335 - 2.347: 95.5451% ( 7) 00:13:12.702 2.347 - 2.359: 95.6780% ( 17) 00:13:12.702 2.359 - 2.370: 95.9828% ( 39) 00:13:12.702 2.370 - 2.382: 96.1547% ( 22) 00:13:12.702 2.382 - 2.394: 96.3970% ( 31) 00:13:12.702 2.394 - 2.406: 96.7175% ( 41) 00:13:12.702 2.406 - 2.418: 96.9519% ( 30) 00:13:12.702 2.418 - 2.430: 97.2020% ( 32) 00:13:12.702 
2.430 - 2.441: 97.3505% ( 19) 00:13:12.702 2.441 - 2.453: 97.4756% ( 16) 00:13:12.703 2.453 - 2.465: 97.5772% ( 13) 00:13:12.703 2.465 - 2.477: 97.6319% ( 7) 00:13:12.703 2.477 - 2.489: 97.7335% ( 13) 00:13:12.703 2.489 - 2.501: 97.7726% ( 5) 00:13:12.703 2.501 - 2.513: 97.8273% ( 7) 00:13:12.703 2.513 - 2.524: 97.8664% ( 5) 00:13:12.703 2.524 - 2.536: 97.9054% ( 5) 00:13:12.703 2.536 - 2.548: 97.9367% ( 4) 00:13:12.703 2.548 - 2.560: 97.9914% ( 7) 00:13:12.703 2.560 - 2.572: 98.0070% ( 2) 00:13:12.703 2.572 - 2.584: 98.0227% ( 2) 00:13:12.703 2.584 - 2.596: 98.0383% ( 2) 00:13:12.703 2.607 - 2.619: 98.0539% ( 2) 00:13:12.703 2.619 - 2.631: 98.0930% ( 5) 00:13:12.703 2.631 - 2.643: 98.1165% ( 3) 00:13:12.703 2.643 - 2.655: 98.1321% ( 2) 00:13:12.703 2.655 - 2.667: 98.1399% ( 1) 00:13:12.703 2.667 - 2.679: 98.1633% ( 3) 00:13:12.703 2.679 - 2.690: 98.1946% ( 4) 00:13:12.703 2.690 - 2.702: 98.2024% ( 1) 00:13:12.703 2.702 - 2.714: 98.2337% ( 4) 00:13:12.703 2.726 - 2.738: 98.2415% ( 1) 00:13:12.703 2.738 - 2.750: 98.2571% ( 2) 00:13:12.703 2.761 - 2.773: 98.2728% ( 2) 00:13:12.703 2.785 - 2.797: 98.2806% ( 1) 00:13:12.703 2.797 - 2.809: 98.2884% ( 1) 00:13:12.703 2.809 - 2.821: 98.3353% ( 6) 00:13:12.703 2.844 - 2.856: 98.3509% ( 2) 00:13:12.703 2.856 - 2.868: 98.3587% ( 1) 00:13:12.703 2.892 - 2.904: 98.3665% ( 1) [2024-12-11 14:49:55.157326] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:12.703 2.904 - 2.916: 98.3744% ( 1) 00:13:12.703 2.927 - 2.939: 98.3822% ( 1) 00:13:12.703 2.951 - 2.963: 98.3900% ( 1) 00:13:12.703 3.022 - 3.034: 98.4134% ( 3) 00:13:12.703 3.058 - 3.081: 98.4369% ( 3) 00:13:12.703 3.081 - 3.105: 98.4447% ( 1) 00:13:12.703 3.129 - 3.153: 98.4525% ( 1) 00:13:12.703 3.176 - 3.200: 98.4682% ( 2) 00:13:12.703 3.200 - 3.224: 98.4760% ( 1) 00:13:12.703 3.224 - 3.247: 98.4994% ( 3) 00:13:12.703 3.295 - 3.319: 98.5072% ( 1) 00:13:12.703 3.319 - 3.342: 98.5150% ( 1) 00:13:12.703 3.366 - 3.390: 98.5229% ( 1) 00:13:12.703 3.390 - 3.413: 98.5307% ( 1) 00:13:12.703 3.413 - 3.437: 98.5385% ( 1) 00:13:12.703 3.437 - 3.461: 98.5463% ( 1) 00:13:12.703 3.461 - 3.484: 98.5541% ( 1) 00:13:12.703 3.484 - 3.508: 98.5698% ( 2) 00:13:12.703 3.508 - 3.532: 98.5776% ( 1) 00:13:12.703 3.532 - 3.556: 98.5932% ( 2) 00:13:12.703 3.556 - 3.579: 98.6088% ( 2) 00:13:12.703 3.579 - 3.603: 98.6323% ( 3) 00:13:12.703 3.627 - 3.650: 98.6401% ( 1) 00:13:12.703 3.745 - 3.769: 98.6479% ( 1) 00:13:12.703 3.769 - 3.793: 98.6714% ( 3) 00:13:12.703 3.816 - 3.840: 98.6792% ( 1) 00:13:12.703 3.864 - 3.887: 98.6870% ( 1) 00:13:12.703 3.959 - 3.982: 98.6948% ( 1) 00:13:12.703 3.982 - 4.006: 98.7026% ( 1) 00:13:12.703 4.006 - 4.030: 98.7104% ( 1) 00:13:12.703 4.030 - 4.053: 98.7182% ( 1) 00:13:12.703 4.053 - 4.077: 98.7261% ( 1) 00:13:12.703 4.077 - 4.101: 98.7339% ( 1) 00:13:12.703 4.314 - 4.338: 98.7417% ( 1) 00:13:12.703 5.144 - 5.167: 98.7495% ( 1) 00:13:12.703 6.163 - 6.210: 98.7651% ( 2) 00:13:12.703 6.210 - 6.258: 98.7730% ( 1) 00:13:12.703 6.542 - 6.590: 98.7808% ( 1) 00:13:12.703 6.590 - 6.637: 98.7886% ( 1) 00:13:12.703 6.779 - 6.827: 98.8120% ( 3) 00:13:12.703 6.827 - 6.874: 98.8199% ( 1) 00:13:12.703 7.443 - 7.490: 98.8277% ( 1) 00:13:12.703 7.490 - 7.538: 98.8355% ( 1) 00:13:12.703 7.538 - 7.585: 98.8433% ( 1) 00:13:12.703 7.585 - 7.633: 98.8511% ( 1) 00:13:12.703 7.822 - 7.870: 98.8589% ( 1) 00:13:12.703 8.154 - 8.201: 98.8667% ( 1) 00:13:12.703 8.201 - 8.249: 98.8746% ( 1) 00:13:12.703 
8.439 - 8.486: 98.8824% ( 1) 00:13:12.703 9.387 - 9.434: 98.8902% ( 1) 00:13:12.703 9.719 - 9.766: 98.8980% ( 1) 00:13:12.703 10.335 - 10.382: 98.9058% ( 1) 00:13:12.703 15.360 - 15.455: 98.9136% ( 1) 00:13:12.703 15.644 - 15.739: 98.9215% ( 1) 00:13:12.703 15.739 - 15.834: 98.9293% ( 1) 00:13:12.703 15.834 - 15.929: 98.9449% ( 2) 00:13:12.703 15.929 - 16.024: 98.9683% ( 3) 00:13:12.703 16.024 - 16.119: 99.0074% ( 5) 00:13:12.703 16.119 - 16.213: 99.0152% ( 1) 00:13:12.703 16.213 - 16.308: 99.0387% ( 3) 00:13:12.703 16.308 - 16.403: 99.0543% ( 2) 00:13:12.703 16.403 - 16.498: 99.1012% ( 6) 00:13:12.703 16.498 - 16.593: 99.1559% ( 7) 00:13:12.703 16.593 - 16.687: 99.1794% ( 3) 00:13:12.703 16.687 - 16.782: 99.2028% ( 3) 00:13:12.703 16.782 - 16.877: 99.2497% ( 6) 00:13:12.703 16.972 - 17.067: 99.2575% ( 1) 00:13:12.703 17.067 - 17.161: 99.2732% ( 2) 00:13:12.703 17.256 - 17.351: 99.2810% ( 1) 00:13:12.703 17.446 - 17.541: 99.2888% ( 1) 00:13:12.703 17.541 - 17.636: 99.2966% ( 1) 00:13:12.703 18.584 - 18.679: 99.3044% ( 1) 00:13:12.703 18.773 - 18.868: 99.3122% ( 1) 00:13:12.703 20.859 - 20.954: 99.3200% ( 1) 00:13:12.703 2949.120 - 2961.256: 99.3279% ( 1) 00:13:12.703 3021.938 - 3034.074: 99.3357% ( 1) 00:13:12.703 3980.705 - 4004.978: 99.8750% ( 69) 00:13:12.703 4004.978 - 4029.250: 99.9844% ( 14) 00:13:12.703 4975.881 - 5000.154: 99.9922% ( 1) 00:13:12.703 5000.154 - 5024.427: 100.0000% ( 1) 00:13:12.703 00:13:12.703 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:12.703 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:12.703 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:12.703 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:12.703 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:12.961 [ 00:13:12.961 { 00:13:12.961 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:12.961 "subtype": "Discovery", 00:13:12.961 "listen_addresses": [], 00:13:12.961 "allow_any_host": true, 00:13:12.961 "hosts": [] 00:13:12.961 }, 00:13:12.961 { 00:13:12.961 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:12.961 "subtype": "NVMe", 00:13:12.961 "listen_addresses": [ 00:13:12.961 { 00:13:12.961 "trtype": "VFIOUSER", 00:13:12.961 "adrfam": "IPv4", 00:13:12.961 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:12.961 "trsvcid": "0" 00:13:12.961 } 00:13:12.961 ], 00:13:12.961 "allow_any_host": true, 00:13:12.961 "hosts": [], 00:13:12.961 "serial_number": "SPDK1", 00:13:12.961 "model_number": "SPDK bdev Controller", 00:13:12.961 "max_namespaces": 32, 00:13:12.961 "min_cntlid": 1, 00:13:12.961 "max_cntlid": 65519, 00:13:12.961 "namespaces": [ 00:13:12.961 { 00:13:12.961 "nsid": 1, 00:13:12.961 "bdev_name": "Malloc1", 00:13:12.961 "name": "Malloc1", 00:13:12.961 "nguid": "FCCE59DA9B7246D6B3C68DEA2C9732CF", 00:13:12.961 "uuid": "fcce59da-9b72-46d6-b3c6-8dea2c9732cf" 00:13:12.961 }, 00:13:12.961 { 00:13:12.961 "nsid": 2, 00:13:12.961 "bdev_name": "Malloc3", 00:13:12.961 "name": "Malloc3", 00:13:12.961 "nguid": "A1CEC322A166466C846767798DBAD457", 00:13:12.961 "uuid": "a1cec322-a166-466c-8467-67798dbad457" 
00:13:12.961 } 00:13:12.961 ] 00:13:12.961 }, 00:13:12.961 { 00:13:12.961 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:12.961 "subtype": "NVMe", 00:13:12.961 "listen_addresses": [ 00:13:12.961 { 00:13:12.961 "trtype": "VFIOUSER", 00:13:12.961 "adrfam": "IPv4", 00:13:12.961 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:12.961 "trsvcid": "0" 00:13:12.961 } 00:13:12.961 ], 00:13:12.961 "allow_any_host": true, 00:13:12.961 "hosts": [], 00:13:12.961 "serial_number": "SPDK2", 00:13:12.961 "model_number": "SPDK bdev Controller", 00:13:12.961 "max_namespaces": 32, 00:13:12.961 "min_cntlid": 1, 00:13:12.961 "max_cntlid": 65519, 00:13:12.961 "namespaces": [ 00:13:12.961 { 00:13:12.961 "nsid": 1, 00:13:12.961 "bdev_name": "Malloc2", 00:13:12.961 "name": "Malloc2", 00:13:12.961 "nguid": "FBE172E647464DC299198F5F92819C61", 00:13:12.961 "uuid": "fbe172e6-4746-4dc2-9919-8f5f92819c61" 00:13:12.961 } 00:13:12.961 ] 00:13:12.961 } 00:13:12.961 ] 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=649398 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:12.961 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:12.961 [2024-12-11 14:49:55.711044] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:13.219 Malloc4 00:13:13.219 14:49:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:13.476 [2024-12-11 14:49:56.109046] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:13.476 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:13.476 Asynchronous Event Request test 00:13:13.476 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.476 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:13.477 Registering asynchronous event callbacks... 00:13:13.477 Starting namespace attribute notice tests for all controllers... 
00:13:13.477 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:13.477 aer_cb - Changed Namespace 00:13:13.477 Cleaning up... 00:13:13.735 [ 00:13:13.735 { 00:13:13.735 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:13.735 "subtype": "Discovery", 00:13:13.735 "listen_addresses": [], 00:13:13.735 "allow_any_host": true, 00:13:13.735 "hosts": [] 00:13:13.735 }, 00:13:13.735 { 00:13:13.735 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:13.735 "subtype": "NVMe", 00:13:13.735 "listen_addresses": [ 00:13:13.735 { 00:13:13.735 "trtype": "VFIOUSER", 00:13:13.735 "adrfam": "IPv4", 00:13:13.735 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:13.735 "trsvcid": "0" 00:13:13.735 } 00:13:13.735 ], 00:13:13.735 "allow_any_host": true, 00:13:13.735 "hosts": [], 00:13:13.735 "serial_number": "SPDK1", 00:13:13.735 "model_number": "SPDK bdev Controller", 00:13:13.735 "max_namespaces": 32, 00:13:13.735 "min_cntlid": 1, 00:13:13.735 "max_cntlid": 65519, 00:13:13.735 "namespaces": [ 00:13:13.735 { 00:13:13.735 "nsid": 1, 00:13:13.735 "bdev_name": "Malloc1", 00:13:13.735 "name": "Malloc1", 00:13:13.735 "nguid": "FCCE59DA9B7246D6B3C68DEA2C9732CF", 00:13:13.735 "uuid": "fcce59da-9b72-46d6-b3c6-8dea2c9732cf" 00:13:13.735 }, 00:13:13.735 { 00:13:13.735 "nsid": 2, 00:13:13.735 "bdev_name": "Malloc3", 00:13:13.735 "name": "Malloc3", 00:13:13.735 "nguid": "A1CEC322A166466C846767798DBAD457", 00:13:13.735 "uuid": "a1cec322-a166-466c-8467-67798dbad457" 00:13:13.735 } 00:13:13.735 ] 00:13:13.735 }, 00:13:13.735 { 00:13:13.735 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:13.735 "subtype": "NVMe", 00:13:13.735 "listen_addresses": [ 00:13:13.735 { 00:13:13.735 "trtype": "VFIOUSER", 00:13:13.735 "adrfam": "IPv4", 00:13:13.735 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:13.735 "trsvcid": "0" 00:13:13.735 } 00:13:13.735 ], 00:13:13.735 "allow_any_host": true, 00:13:13.735 "hosts": [], 00:13:13.735 "serial_number": "SPDK2", 00:13:13.735 "model_number": "SPDK bdev Controller", 00:13:13.735 "max_namespaces": 32, 00:13:13.735 "min_cntlid": 1, 00:13:13.735 "max_cntlid": 65519, 00:13:13.735 "namespaces": [ 00:13:13.735 { 00:13:13.735 "nsid": 1, 00:13:13.735 "bdev_name": "Malloc2", 00:13:13.735 "name": "Malloc2", 00:13:13.735 "nguid": "FBE172E647464DC299198F5F92819C61", 00:13:13.735 "uuid": "fbe172e6-4746-4dc2-9919-8f5f92819c61" 00:13:13.735 }, 00:13:13.735 { 00:13:13.735 "nsid": 2, 00:13:13.735 "bdev_name": "Malloc4", 00:13:13.735 "name": "Malloc4", 00:13:13.735 "nguid": "97095A26427A40A782F0092B99C42E50", 00:13:13.735 "uuid": "97095a26-427a-40a7-82f0-092b99c42e50" 00:13:13.735 } 00:13:13.735 ] 00:13:13.735 } 00:13:13.735 ] 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 649398 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 643801 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 643801 ']' 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 643801 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.735 14:49:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 643801 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 643801' 00:13:13.735 killing process with pid 643801 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 643801 00:13:13.735 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 643801 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=649548 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 649548' 00:13:14.301 Process pid: 649548 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 649548 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 649548 ']' 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.301 14:49:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.301 [2024-12-11 14:49:56.828733] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:14.301 [2024-12-11 14:49:56.829988] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:13:14.301 [2024-12-11 14:49:56.830056] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.301 [2024-12-11 14:49:56.900406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.301 [2024-12-11 14:49:56.960803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.301 [2024-12-11 14:49:56.960864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.301 [2024-12-11 14:49:56.960893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.301 [2024-12-11 14:49:56.960904] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.301 [2024-12-11 14:49:56.960914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.301 [2024-12-11 14:49:56.962460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.301 [2024-12-11 14:49:56.962486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.301 [2024-12-11 14:49:56.962553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.301 [2024-12-11 14:49:56.962555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.301 [2024-12-11 14:49:57.059689] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:14.301 [2024-12-11 14:49:57.059895] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:14.301 [2024-12-11 14:49:57.060236] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:14.301 [2024-12-11 14:49:57.060937] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:14.301 [2024-12-11 14:49:57.061162] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
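[Editor's note] The rpc.py trace that follows provisions two vfio-user controllers against the interrupt-mode target started above. Condensed into a sketch: every command and flag below appears verbatim in the trace; only the loop over the two devices and the comments are editorial.

```bash
#!/usr/bin/env bash
# Sketch of the interrupt-mode bring-up driven by setup_nvmf_vfio_user below.
# nvmf_tgt was already launched with: -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Create the VFIOUSER transport; -M -I enables its interrupt-mode options
$SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user

for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    # 64 MiB malloc bdev with 512-byte blocks backing each controller
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    # The listener address is the vfio-user socket directory created above
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
```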
00:13:14.559 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.559 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:14.559 14:49:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:15.493 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:15.753 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:15.753 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:15.753 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:15.753 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:15.753 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:16.013 Malloc1 00:13:16.013 14:49:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:16.273 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:16.840 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:16.840 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:16.840 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:16.840 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:17.098 Malloc2 00:13:17.356 14:49:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:17.614 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:17.871 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 649548 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 649548 ']' 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 649548 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 649548 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 649548' 00:13:18.129 killing process with pid 649548 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 649548 00:13:18.129 14:50:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 649548 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:18.388 00:13:18.388 real 0m53.522s 00:13:18.388 user 3m26.803s 00:13:18.388 sys 0m3.915s 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:18.388 ************************************ 00:13:18.388 END TEST nvmf_vfio_user 00:13:18.388 ************************************ 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.388 ************************************ 00:13:18.388 START TEST nvmf_vfio_user_nvme_compliance 00:13:18.388 ************************************ 00:13:18.388 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:18.388 * Looking for test storage... 
00:13:18.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.647 --rc genhtml_branch_coverage=1 00:13:18.647 --rc genhtml_function_coverage=1 00:13:18.647 --rc genhtml_legend=1 00:13:18.647 --rc geninfo_all_blocks=1 00:13:18.647 --rc geninfo_unexecuted_blocks=1 00:13:18.647 00:13:18.647 ' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.647 --rc genhtml_branch_coverage=1 00:13:18.647 --rc genhtml_function_coverage=1 00:13:18.647 --rc genhtml_legend=1 00:13:18.647 --rc geninfo_all_blocks=1 00:13:18.647 --rc geninfo_unexecuted_blocks=1 00:13:18.647 00:13:18.647 ' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.647 --rc genhtml_branch_coverage=1 00:13:18.647 --rc genhtml_function_coverage=1 00:13:18.647 --rc genhtml_legend=1 00:13:18.647 --rc geninfo_all_blocks=1 00:13:18.647 --rc geninfo_unexecuted_blocks=1 00:13:18.647 00:13:18.647 ' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:18.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:18.647 --rc genhtml_branch_coverage=1 00:13:18.647 --rc genhtml_function_coverage=1 00:13:18.647 --rc genhtml_legend=1 00:13:18.647 --rc geninfo_all_blocks=1 00:13:18.647 --rc 
geninfo_unexecuted_blocks=1 00:13:18.647 00:13:18.647 ' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.647 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:18.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=650152 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 650152' 00:13:18.648 Process pid: 650152 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 650152 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 650152 ']' 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.648 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:18.648 [2024-12-11 14:50:01.329187] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
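The entry above launches the dedicated SPDK target process for the compliance run. As a minimal standalone equivalent (a sketch, assuming the same build tree as this workspace; the flag meanings are inferred from the surrounding trace, not from this log alone):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -i 0      shared-memory instance ID (NVMF_APP_SHM_ID in the trace above)
  # -e 0xFFFF tracepoint group mask, matching the "Tracepoint Group Mask 0xFFFF" notice below
  # -m 0x7    core mask selecting cores 0-2, which is why three reactors start below
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &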
00:13:18.648 [2024-12-11 14:50:01.329286] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.648 [2024-12-11 14:50:01.397950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:18.906 [2024-12-11 14:50:01.459521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.906 [2024-12-11 14:50:01.459598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.906 [2024-12-11 14:50:01.459636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.906 [2024-12-11 14:50:01.459648] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.906 [2024-12-11 14:50:01.459658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.906 [2024-12-11 14:50:01.461181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.906 [2024-12-11 14:50:01.461215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.906 [2024-12-11 14:50:01.461217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.906 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.906 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:18.906 14:50:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 malloc0 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:20.281 14:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.281 14:50:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:20.281 00:13:20.281 00:13:20.281 CUnit - A unit testing framework for C - Version 2.1-3 00:13:20.281 http://cunit.sourceforge.net/ 00:13:20.281 00:13:20.281 00:13:20.281 Suite: nvme_compliance 00:13:20.281 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-11 14:50:02.876193] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.281 [2024-12-11 14:50:02.877705] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:20.281 [2024-12-11 14:50:02.877731] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:20.281 [2024-12-11 14:50:02.877744] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:20.281 [2024-12-11 14:50:02.879217] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.281 passed 00:13:20.281 Test: admin_identify_ctrlr_verify_fused ...[2024-12-11 14:50:02.964869] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.281 [2024-12-11 14:50:02.967873] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.281 passed 00:13:20.539 Test: admin_identify_ns ...[2024-12-11 14:50:03.056071] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.539 [2024-12-11 14:50:03.116581] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:20.539 [2024-12-11 14:50:03.124565] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:20.539 [2024-12-11 14:50:03.145681] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:20.539 passed 00:13:20.539 Test: admin_get_features_mandatory_features ...[2024-12-11 14:50:03.228259] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.539 [2024-12-11 14:50:03.231276] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.539 passed 00:13:20.797 Test: admin_get_features_optional_features ...[2024-12-11 14:50:03.312823] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.797 [2024-12-11 14:50:03.317856] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.797 passed 00:13:20.797 Test: admin_set_features_number_of_queues ...[2024-12-11 14:50:03.398045] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:20.797 [2024-12-11 14:50:03.506667] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:20.797 passed 00:13:21.055 Test: admin_get_log_page_mandatory_logs ...[2024-12-11 14:50:03.587389] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.055 [2024-12-11 14:50:03.592421] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.055 passed 00:13:21.055 Test: admin_get_log_page_with_lpo ...[2024-12-11 14:50:03.672740] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.055 [2024-12-11 14:50:03.740561] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:21.055 [2024-12-11 14:50:03.753656] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.055 passed 00:13:21.313 Test: fabric_property_get ...[2024-12-11 14:50:03.836762] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.313 [2024-12-11 14:50:03.838065] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:21.313 [2024-12-11 14:50:03.839784] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.313 passed 00:13:21.313 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-11 14:50:03.927398] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.313 [2024-12-11 14:50:03.928711] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:21.313 [2024-12-11 14:50:03.930419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.313 passed 00:13:21.313 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-11 14:50:04.011739] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.571 [2024-12-11 14:50:04.096574] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.571 [2024-12-11 14:50:04.112556] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:21.571 [2024-12-11 14:50:04.117722] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:21.571 passed 00:13:21.571 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-11 14:50:04.198967] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:21.571 [2024-12-11 14:50:04.200263] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:21.571 [2024-12-11 14:50:04.201986] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller
00:13:21.571 passed
00:13:21.571 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-11 14:50:04.287162] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:21.829 [2024-12-11 14:50:04.363555] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:13:21.829 [2024-12-11 14:50:04.387556] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:13:21.829 [2024-12-11 14:50:04.392687] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:21.829 passed
00:13:21.829 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-11 14:50:04.476388] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:21.829 [2024-12-11 14:50:04.477759] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:13:21.829 [2024-12-11 14:50:04.477800] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:13:21.829 [2024-12-11 14:50:04.479419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:21.829 passed
00:13:21.829 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-11 14:50:04.560672] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:22.087 [2024-12-11 14:50:04.656554] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:13:22.087 [2024-12-11 14:50:04.664553] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:13:22.087 [2024-12-11 14:50:04.672567] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:13:22.087 [2024-12-11 14:50:04.680585] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:13:22.087 [2024-12-11 14:50:04.709674] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:22.087 passed
00:13:22.087 Test: admin_create_io_sq_verify_pc ...[2024-12-11 14:50:04.790285] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:22.087 [2024-12-11 14:50:04.809572] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:13:22.087 [2024-12-11 14:50:04.826743] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:22.087 passed
00:13:22.344 Test: admin_create_io_qp_max_qps ...[2024-12-11 14:50:04.909336] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:23.276 [2024-12-11 14:50:06.020564] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:13:23.841 [2024-12-11 14:50:06.393028] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:23.841 passed
00:13:23.841 Test: admin_create_io_sq_shared_cq ...[2024-12-11 14:50:06.475154] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:23.841 [2024-12-11 14:50:06.606558] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:13:24.100 [2024-12-11 14:50:06.643644] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:24.100 passed
00:13:24.100
00:13:24.100 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:24.100               suites      1      1    n/a      0        0
00:13:24.100                tests     18     18     18      0        0
00:13:24.100              asserts    360    360    360      0      n/a
00:13:24.100
00:13:24.100 Elapsed time = 1.561 seconds
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 650152
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 650152 ']'
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 650152
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650152
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650152'
00:13:24.100 killing process with pid 650152
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 650152
00:13:24.100 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 650152
00:13:24.358 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:13:24.358 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:13:24.358
00:13:24.358 real 0m5.886s
00:13:24.358 user 0m16.514s
00:13:24.358 sys 0m0.567s
00:13:24.358 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:24.358 14:50:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:13:24.358 ************************************
00:13:24.358 END TEST nvmf_vfio_user_nvme_compliance
00:13:24.358 ************************************
00:13:24.358 14:50:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:13:24.358 14:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:24.358 14:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:24.358 14:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:24.358 ************************************
00:13:24.358 START TEST nvmf_vfio_user_fuzz
00:13:24.358 ************************************
00:13:24.359 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:13:24.359 * Looking for test storage...
00:13:24.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.359 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:24.359 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:13:24.359 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.617 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:24.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.617 --rc genhtml_branch_coverage=1 00:13:24.617 --rc genhtml_function_coverage=1 00:13:24.617 --rc genhtml_legend=1 00:13:24.617 --rc geninfo_all_blocks=1 00:13:24.617 --rc geninfo_unexecuted_blocks=1 00:13:24.617 00:13:24.617 ' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:24.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.618 --rc genhtml_branch_coverage=1 00:13:24.618 --rc genhtml_function_coverage=1 00:13:24.618 --rc genhtml_legend=1 00:13:24.618 --rc geninfo_all_blocks=1 00:13:24.618 --rc geninfo_unexecuted_blocks=1 00:13:24.618 00:13:24.618 ' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:24.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.618 --rc genhtml_branch_coverage=1 00:13:24.618 --rc genhtml_function_coverage=1 00:13:24.618 --rc genhtml_legend=1 00:13:24.618 --rc geninfo_all_blocks=1 00:13:24.618 --rc geninfo_unexecuted_blocks=1 00:13:24.618 00:13:24.618 ' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:24.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.618 --rc genhtml_branch_coverage=1 00:13:24.618 --rc genhtml_function_coverage=1 00:13:24.618 --rc genhtml_legend=1 00:13:24.618 --rc geninfo_all_blocks=1 00:13:24.618 --rc geninfo_unexecuted_blocks=1 00:13:24.618 00:13:24.618 ' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:24.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=650894 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 650894' 00:13:24.618 Process pid: 650894 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 650894 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 650894 ']' 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
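waitforlisten above blocks until the freshly started target answers on its RPC socket. A sketch of that style of readiness loop, assuming the stock rpc.py client from the SPDK scripts directory (rpc_get_methods is a cheap call that succeeds once the app is serving RPCs):

  # poll the UNIX-domain RPC socket until nvmf_tgt is ready
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done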
00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.618 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.876 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.876 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:24.876 14:50:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:25.810 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:25.810 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.810 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:25.811 malloc0 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
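The trid assignment above completes the target setup: VFIOUSER transport, malloc bdev, subsystem, namespace, and vfio-user listener are all in place. The same sequence written against scripts/rpc.py directly, rather than through the test suite's rpc_cmd wrapper (a sketch; the argument values are taken verbatim from the trace):

  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0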
00:13:25.811 14:50:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:13:57.881 Fuzzing completed. Shutting down the fuzz application
00:13:57.881
00:13:57.881 Dumping successful admin opcodes:
00:13:57.881 9, 10,
00:13:57.881 Dumping successful io opcodes:
00:13:57.881 0,
00:13:57.881 NS: 0x20000081ef00 I/O qp, Total commands completed: 692416, total successful commands: 2696, random_seed: 3628241472
00:13:57.881 NS: 0x20000081ef00 admin qp, Total commands completed: 170736, total successful commands: 40, random_seed: 2699236224
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 650894
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 650894 ']'
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 650894
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:57.881 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 650894
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 650894'
00:13:57.881 killing process with pid 650894
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 650894
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 650894
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:13:57.881
00:13:57.881 real 0m32.282s
00:13:57.881 user 0m33.816s
00:13:57.881 sys 0m26.775s
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:13:57.881 ************************************
00:13:57.881 END TEST nvmf_vfio_user_fuzz 00:13:57.881 ************************************ 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.881 ************************************ 00:13:57.881 START TEST nvmf_auth_target 00:13:57.881 ************************************ 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:57.881 * Looking for test storage... 00:13:57.881 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:57.881 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.882 --rc genhtml_branch_coverage=1 00:13:57.882 --rc genhtml_function_coverage=1 00:13:57.882 --rc genhtml_legend=1 00:13:57.882 --rc geninfo_all_blocks=1 00:13:57.882 --rc geninfo_unexecuted_blocks=1 00:13:57.882 00:13:57.882 ' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.882 --rc genhtml_branch_coverage=1 00:13:57.882 --rc genhtml_function_coverage=1 00:13:57.882 --rc genhtml_legend=1 00:13:57.882 --rc geninfo_all_blocks=1 00:13:57.882 --rc geninfo_unexecuted_blocks=1 00:13:57.882 00:13:57.882 ' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.882 --rc genhtml_branch_coverage=1 00:13:57.882 --rc genhtml_function_coverage=1 00:13:57.882 --rc genhtml_legend=1 00:13:57.882 --rc geninfo_all_blocks=1 00:13:57.882 --rc geninfo_unexecuted_blocks=1 00:13:57.882 00:13:57.882 ' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:57.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.882 --rc genhtml_branch_coverage=1 00:13:57.882 --rc genhtml_function_coverage=1 00:13:57.882 --rc genhtml_legend=1 00:13:57.882 --rc geninfo_all_blocks=1 00:13:57.882 --rc geninfo_unexecuted_blocks=1 00:13:57.882 00:13:57.882 ' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.882 14:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:57.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:57.882 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:57.883 14:50:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:59.261 
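The digests and dhgroups arrays traced above define auth.sh's test matrix: every digest is paired with every DH group and every key index, and each combination is pushed to the host app with bdev_nvme_set_options before reconnecting (the loops appear later in this trace). A hedged sketch of that driver, assuming the rpc.py path and /var/tmp/host.sock socket shown in the trace:

digests=("sha256" "sha384" "sha512")
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
# keys[0..3]: key files produced by gen_dhchap_key, as generated below.

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Restrict the host to exactly one digest/dhgroup pair, so the
            # subsequent connect must negotiate that combination or fail.
            ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            # ... connect_authenticate "$digest" "$dhgroup" "$keyid" ...
        done
    done
done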
14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:59.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.261 14:50:41 
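gather_supported_nvmf_pci_devs, traced above, buckets NICs by vendor:device ID (Intel E810 as 0x1592/0x159b, X722 as 0x37d2, plus a list of Mellanox ConnectX IDs) and then walks each bucket, which is where the "Found 0000:0a:00.0 (0x8086 - 0x159b)" lines come from. A simplified sketch of the same classification, assuming plain lspci -Dn output instead of the script's cached PCI map:

#!/usr/bin/env bash
# Sketch: bucket NVMe-oF-capable NICs the way nvmf/common.sh does.
declare -a e810 x722 mlx
while read -r addr _class ids _; do
    case "$ids" in
        8086:1592 | 8086:159b) e810+=("$addr") ;; # Intel E810 (ice)
        8086:37d2)             x722+=("$addr") ;; # Intel X722 (i40e)
        15b3:*)                mlx+=("$addr")  ;; # Mellanox ConnectX family
                                                  # (simplified; the script
                                                  # matches specific IDs)
    esac
done < <(lspci -Dn)

for pci in "${e810[@]}"; do
    # Each matching device exposes its netdev name under sysfs, e.g. cvl_0_0.
    echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net" 2> /dev/null)"
done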
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:59.261 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:59.261 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.261 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:59.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.262 14:50:41 
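nvmf_tcp_init, traced above, wires the two E810 ports back-to-back: cvl_0_0 becomes the target interface inside namespace cvl_0_0_ns_spdk (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP on port 4420. A condensed replay of those commands:

# Target interface lives in its own namespace; initiator stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT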
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:59.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:13:59.262 00:13:59.262 --- 10.0.0.2 ping statistics --- 00:13:59.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.262 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:59.262 00:13:59.262 --- 10.0.0.1 ping statistics --- 00:13:59.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.262 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=656343 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 656343 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 656343 ']' 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
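The pings above confirm the two sides can reach each other before nvmfappstart launches nvmf_tgt inside the target namespace and waits for its RPC socket. A simplified equivalent of that launch-and-wait step (the real waitforlisten polls through the RPC framework rather than just testing that the socket file exists):

# Reachability check across the namespace boundary, as in the trace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!

# Wait for the target's RPC socket, bailing out if the process dies first.
until [[ -S /var/tmp/spdk.sock ]]; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
    sleep 0.1
done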
00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.262 14:50:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=656364 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:59.521 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d8fe821dd41c52a660e27ba933afb7d2cff26d9888c952cc 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qp2 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d8fe821dd41c52a660e27ba933afb7d2cff26d9888c952cc 0 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d8fe821dd41c52a660e27ba933afb7d2cff26d9888c952cc 0 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d8fe821dd41c52a660e27ba933afb7d2cff26d9888c952cc 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:59.522 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qp2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qp2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.qp2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a0b7decccf5bbdaf6dc3db516fbe9a32dda57d6e9f3ce4a05964e269af7cd1f 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jtg 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a0b7decccf5bbdaf6dc3db516fbe9a32dda57d6e9f3ce4a05964e269af7cd1f 3 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a0b7decccf5bbdaf6dc3db516fbe9a32dda57d6e9f3ce4a05964e269af7cd1f 3 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a0b7decccf5bbdaf6dc3db516fbe9a32dda57d6e9f3ce4a05964e269af7cd1f 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jtg 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jtg 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.jtg 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bacf272c51a2a4f658aa1a8e7f8e8994 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5t5 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bacf272c51a2a4f658aa1a8e7f8e8994 1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bacf272c51a2a4f658aa1a8e7f8e8994 1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bacf272c51a2a4f658aa1a8e7f8e8994 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5t5 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5t5 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5t5 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=976aa6f39eff85958bd322c8065a6ae0c1c816fa83f36c7a 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.me4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 976aa6f39eff85958bd322c8065a6ae0c1c816fa83f36c7a 2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 976aa6f39eff85958bd322c8065a6ae0c1c816fa83f36c7a 2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.781 14:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=976aa6f39eff85958bd322c8065a6ae0c1c816fa83f36c7a 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.me4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.me4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.me4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb2ae1b8e6d17b8ea2e1f5c1ed49c124eb2186436ae2c34d 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Fc4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb2ae1b8e6d17b8ea2e1f5c1ed49c124eb2186436ae2c34d 2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb2ae1b8e6d17b8ea2e1f5c1ed49c124eb2186436ae2c34d 2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb2ae1b8e6d17b8ea2e1f5c1ed49c124eb2186436ae2c34d 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Fc4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Fc4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Fc4 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1ae6269967cff2d2734cd1418553dc94 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UwE 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1ae6269967cff2d2734cd1418553dc94 1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1ae6269967cff2d2734cd1418553dc94 1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:59.781 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1ae6269967cff2d2734cd1418553dc94 00:13:59.782 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:59.782 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UwE 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UwE 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.UwE 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2336e8d35672fd036fb8477ee3a2fb74e75b0586cabc8496a51ba899a947391 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LbG 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key e2336e8d35672fd036fb8477ee3a2fb74e75b0586cabc8496a51ba899a947391 3 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2336e8d35672fd036fb8477ee3a2fb74e75b0586cabc8496a51ba899a947391 3 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2336e8d35672fd036fb8477ee3a2fb74e75b0586cabc8496a51ba899a947391 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LbG 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LbG 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.LbG 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 656343 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 656343 ']' 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.040 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 656364 /var/tmp/host.sock 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 656364 ']' 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:00.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
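Every secret generated above uses the NVMe DH-HMAC-CHAP representation DHHC-1:<hash id>:<base64 payload>:, where the hash id is 00/01/02/03 for none/SHA-256/SHA-384/SHA-512 and the payload is the hex secret with a little-endian CRC-32 of itself appended (decoding the DHHC-1:00:ZDhmZTgy... string used later in the trace gives back the d8fe82... hex key plus four CRC bytes). A condensed sketch of gen_dhchap_key reconstructed from the traced commands; the inline Python stands in for the script's unlogged "python -" step:

gen_dhchap_key() { # usage: gen_dhchap_key <null|sha256|sha384|sha512> <len>
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=${digests[$1]} len=$2 key file

    # <len> hex characters of entropy, i.e. len/2 bytes from /dev/urandom.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$1.XXX")

    # DHHC-1:<2-digit hash id>:<base64(secret + crc32_le(secret))>:
    python3 - "$key" "$digest" > "$file" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
    chmod 0600 "$file"
    echo "$file"
}

keys[0]=$(gen_dhchap_key null 48) then yields a 0600 file such as /tmp/spdk.key-null.qp2, matching the trace.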
00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.298 14:50:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qp2 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.qp2 00:14:00.556 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.qp2 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.jtg ]] 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jtg 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jtg 00:14:00.814 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jtg 00:14:01.072 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:01.072 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5t5 00:14:01.072 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.072 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.072 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.072 14:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5t5 00:14:01.072 14:50:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5t5 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.me4 ]] 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.me4 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.me4 00:14:01.330 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.me4 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fc4 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Fc4 00:14:01.588 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Fc4 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.UwE ]] 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UwE 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UwE 00:14:01.846 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UwE 00:14:02.126 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:02.127 14:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LbG 00:14:02.127 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.127 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.127 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.127 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LbG 00:14:02.127 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LbG 00:14:02.465 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:02.465 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:02.465 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.465 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.465 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.465 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.739 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.739 
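As the trace above shows, each key file is registered twice: once with the target (rpc_cmd, which runs rpc.py inside the target namespace against /var/tmp/spdk.sock) and once with the host app (hostrpc, against /var/tmp/host.sock). A sketch of that registration pass, with controller keys added only where one was generated:

rpc=./scripts/rpc.py
for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[i]}"                        # target side
    $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[i]}" # host side
    if [[ -n ${ckeys[i]} ]]; then # bidirectional (controller) key, if any
        $rpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
        $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done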
14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.997 00:14:02.997 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.997 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.997 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.255 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.255 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.255 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.255 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.255 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.255 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.255 { 00:14:03.255 "cntlid": 1, 00:14:03.255 "qid": 0, 00:14:03.255 "state": "enabled", 00:14:03.255 "thread": "nvmf_tgt_poll_group_000", 00:14:03.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:03.255 "listen_address": { 00:14:03.255 "trtype": "TCP", 00:14:03.255 "adrfam": "IPv4", 00:14:03.255 "traddr": "10.0.0.2", 00:14:03.255 "trsvcid": "4420" 00:14:03.255 }, 00:14:03.255 "peer_address": { 00:14:03.255 "trtype": "TCP", 00:14:03.255 "adrfam": "IPv4", 00:14:03.255 "traddr": "10.0.0.1", 00:14:03.255 "trsvcid": "47376" 00:14:03.255 }, 00:14:03.255 "auth": { 00:14:03.255 "state": "completed", 00:14:03.255 "digest": "sha256", 00:14:03.255 "dhgroup": "null" 00:14:03.255 } 00:14:03.255 } 00:14:03.255 ]' 00:14:03.255 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.513 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.771 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:03.771 14:50:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:04.704 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.705 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.963 14:50:47 
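The nvme connect call above (@36) hands nvme-cli the same secrets the target holds in its keyring, using the NVMe DH-HMAC-CHAP secret representation DHHC-1:NN:<base64>:, where NN names the hash the secret was optionally transformed with (00 meaning unhashed; 01, 02, 03 corresponding to SHA-256, SHA-384, SHA-512) and the base64 payload carries the secret plus a CRC-32. Passing --dhchap-ctrl-secret as well makes the exchange bidirectional, so the controller must also prove knowledge of its key. The shape of the call, with the secret payloads elided:

  # Shape of the nvme-cli invocation at @36 (addresses and NQNs are the
  # ones in this log; secret payloads elided):
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-... \
    --hostid 5b23e107-... -l 0 \
    --dhchap-secret      'DHHC-1:00:<base64+crc>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64+crc>:'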
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:04.963 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.221 00:14:05.221 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.221 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.221 14:50:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.478 { 00:14:05.478 "cntlid": 3, 00:14:05.478 "qid": 0, 00:14:05.478 "state": "enabled", 00:14:05.478 "thread": "nvmf_tgt_poll_group_000", 00:14:05.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:05.478 "listen_address": { 00:14:05.478 "trtype": "TCP", 00:14:05.478 "adrfam": "IPv4", 00:14:05.478 "traddr": "10.0.0.2", 00:14:05.478 "trsvcid": "4420" 00:14:05.478 }, 00:14:05.478 "peer_address": { 00:14:05.478 "trtype": "TCP", 00:14:05.478 "adrfam": "IPv4", 00:14:05.478 "traddr": "10.0.0.1", 00:14:05.478 "trsvcid": "47396" 00:14:05.478 }, 00:14:05.478 "auth": { 00:14:05.478 "state": "completed", 00:14:05.478 "digest": "sha256", 00:14:05.478 "dhgroup": "null" 00:14:05.478 } 00:14:05.478 } 00:14:05.478 ]' 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:05.478 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.736 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.736 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.736 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.993 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:05.993 14:50:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.924 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.182 14:50:49 
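On the SPDK host side the equivalent of that connect is bdev_nvme_attach_controller (@60, next entry): --dhchap-key keyN and --dhchap-ctrlr-key ckeyN are the names of keys registered earlier with keyring_file_add_key, so no secret material appears on the attach command line. A minimal two-step sketch, assuming a key file already generated (the path and $hostnqn are placeholders; the log's real files look like /tmp/spdk.key-sha512.LbG):

  # Register the key on the host socket, then attach using its name.
  rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-example  # path assumed
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0                         # $hostnqn assumed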
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.182 14:50:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.441 00:14:07.441 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.441 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.441 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.699 { 00:14:07.699 "cntlid": 5, 00:14:07.699 "qid": 0, 00:14:07.699 "state": "enabled", 00:14:07.699 "thread": "nvmf_tgt_poll_group_000", 00:14:07.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:07.699 "listen_address": { 00:14:07.699 "trtype": "TCP", 00:14:07.699 "adrfam": "IPv4", 00:14:07.699 "traddr": "10.0.0.2", 00:14:07.699 "trsvcid": "4420" 00:14:07.699 }, 00:14:07.699 "peer_address": { 00:14:07.699 "trtype": "TCP", 00:14:07.699 "adrfam": "IPv4", 00:14:07.699 "traddr": "10.0.0.1", 00:14:07.699 "trsvcid": "47424" 00:14:07.699 }, 00:14:07.699 "auth": { 00:14:07.699 "state": "completed", 00:14:07.699 "digest": "sha256", 00:14:07.699 "dhgroup": "null" 00:14:07.699 } 00:14:07.699 } 00:14:07.699 ]' 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.699 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.957 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:07.957 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.957 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.957 14:50:50 
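Each iteration is validated by dumping the subsystem's qpairs and asserting on the negotiated auth block, as in the jq calls above (@73-@77): the attached controller must be nvme0 and the qpair must report the configured digest and dhgroup with state completed. Condensed, under the assumption that $digest and $dhgroup hold this iteration's values:

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]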
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.957 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.214 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:08.215 14:50:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.147 14:50:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.405 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.663 00:14:09.663 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.663 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.663 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.921 { 00:14:09.921 "cntlid": 7, 00:14:09.921 "qid": 0, 00:14:09.921 "state": "enabled", 00:14:09.921 "thread": "nvmf_tgt_poll_group_000", 00:14:09.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:09.921 "listen_address": { 00:14:09.921 "trtype": "TCP", 00:14:09.921 "adrfam": "IPv4", 00:14:09.921 "traddr": "10.0.0.2", 00:14:09.921 "trsvcid": "4420" 00:14:09.921 }, 00:14:09.921 "peer_address": { 00:14:09.921 "trtype": "TCP", 00:14:09.921 "adrfam": "IPv4", 00:14:09.921 "traddr": "10.0.0.1", 00:14:09.921 "trsvcid": "44182" 00:14:09.921 }, 00:14:09.921 "auth": { 00:14:09.921 "state": "completed", 00:14:09.921 "digest": "sha256", 00:14:09.921 "dhgroup": "null" 00:14:09.921 } 00:14:09.921 } 00:14:09.921 ]' 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.921 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.179 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:10.179 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.179 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.179 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.179 14:50:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.437 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:10.437 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.370 14:50:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.628 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.886 00:14:11.886 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.886 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.886 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.143 { 00:14:12.143 "cntlid": 9, 00:14:12.143 "qid": 0, 00:14:12.143 "state": "enabled", 00:14:12.143 "thread": "nvmf_tgt_poll_group_000", 00:14:12.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:12.143 "listen_address": { 00:14:12.143 "trtype": "TCP", 00:14:12.143 "adrfam": "IPv4", 00:14:12.143 "traddr": "10.0.0.2", 00:14:12.143 "trsvcid": "4420" 00:14:12.143 }, 00:14:12.143 "peer_address": { 00:14:12.143 "trtype": "TCP", 00:14:12.143 "adrfam": "IPv4", 00:14:12.143 "traddr": "10.0.0.1", 00:14:12.143 "trsvcid": "44228" 00:14:12.143 }, 00:14:12.143 "auth": { 00:14:12.143 "state": "completed", 00:14:12.143 "digest": "sha256", 00:14:12.143 "dhgroup": "ffdhe2048" 00:14:12.143 } 00:14:12.143 } 00:14:12.143 ]' 00:14:12.143 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.401 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.401 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.401 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:14:12.401 14:50:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.401 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.401 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.401 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.658 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:12.658 14:50:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.591 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.849 14:50:56 
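The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line just above (@68) is why the key3 iterations pass no controller key: the :+ expansion produces the option pair only when ckeys[keyid] is non-empty, and the [[ -n '' ]] check back at @111 showed that no controller key exists for key3. The idiom in isolation:

  # ${var:+word} expands to word only if var is set and non-empty:
  ckeys=([0]=secret0 [3]="")
  ckey=(${ckeys[0]:+--dhchap-ctrlr-key "ckey0"})
  echo "${#ckey[@]}"   # prints 2: option and value are both passed
  ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
  echo "${#ckey[@]}"   # prints 0: the option is omitted entirely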
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.849 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:14.107 00:14:14.107 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.107 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.107 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.365 { 00:14:14.365 "cntlid": 11, 00:14:14.365 "qid": 0, 00:14:14.365 "state": "enabled", 00:14:14.365 "thread": "nvmf_tgt_poll_group_000", 00:14:14.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:14.365 "listen_address": { 00:14:14.365 "trtype": "TCP", 00:14:14.365 "adrfam": "IPv4", 00:14:14.365 "traddr": "10.0.0.2", 00:14:14.365 "trsvcid": "4420" 00:14:14.365 }, 00:14:14.365 "peer_address": { 00:14:14.365 "trtype": "TCP", 00:14:14.365 "adrfam": "IPv4", 00:14:14.365 "traddr": "10.0.0.1", 00:14:14.365 "trsvcid": "44270" 00:14:14.365 }, 00:14:14.365 "auth": { 00:14:14.365 "state": "completed", 00:14:14.365 "digest": "sha256", 00:14:14.365 "dhgroup": "ffdhe2048" 00:14:14.365 } 00:14:14.365 } 00:14:14.365 ]' 00:14:14.365 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.623 14:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.623 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.623 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:14.623 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.623 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.623 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.623 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.881 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:14.881 14:50:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:15.814 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:16.072 14:50:58 
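Teardown, as traced at @78/@82/@83 above, runs in three steps: the host's bdev controller is detached, then (after the separate kernel-initiator pass at @80) nvme disconnect drops the kernel connection, and finally the host entry is removed from the subsystem so the next key can be installed. Equivalent commands, with the host NQN as a placeholder:

  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0            # @78
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0                             # @82
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"   # @83, $hostnqn assumed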
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.072 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:16.330 00:14:16.330 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.330 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.330 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.588 { 00:14:16.588 "cntlid": 13, 00:14:16.588 "qid": 0, 00:14:16.588 "state": "enabled", 00:14:16.588 "thread": "nvmf_tgt_poll_group_000", 00:14:16.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:16.588 "listen_address": { 00:14:16.588 "trtype": "TCP", 00:14:16.588 "adrfam": "IPv4", 00:14:16.588 "traddr": "10.0.0.2", 00:14:16.588 "trsvcid": "4420" 00:14:16.588 }, 00:14:16.588 "peer_address": { 00:14:16.588 "trtype": "TCP", 00:14:16.588 "adrfam": "IPv4", 00:14:16.588 "traddr": "10.0.0.1", 00:14:16.588 "trsvcid": "44298" 00:14:16.588 }, 00:14:16.588 "auth": { 00:14:16.588 "state": "completed", 00:14:16.588 "digest": 
"sha256", 00:14:16.588 "dhgroup": "ffdhe2048" 00:14:16.588 } 00:14:16.588 } 00:14:16.588 ]' 00:14:16.588 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.846 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.104 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:17.104 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.037 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:18.038 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.295 14:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.295 14:51:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:18.860 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.860 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.118 { 00:14:19.118 "cntlid": 15, 00:14:19.118 "qid": 0, 00:14:19.118 "state": "enabled", 00:14:19.118 "thread": "nvmf_tgt_poll_group_000", 00:14:19.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:19.118 "listen_address": { 00:14:19.118 "trtype": "TCP", 00:14:19.118 "adrfam": "IPv4", 00:14:19.118 "traddr": "10.0.0.2", 00:14:19.118 "trsvcid": "4420" 00:14:19.118 }, 00:14:19.118 "peer_address": { 00:14:19.118 "trtype": "TCP", 00:14:19.118 "adrfam": "IPv4", 00:14:19.118 "traddr": "10.0.0.1", 00:14:19.118 
"trsvcid": "44322" 00:14:19.118 }, 00:14:19.118 "auth": { 00:14:19.118 "state": "completed", 00:14:19.118 "digest": "sha256", 00:14:19.118 "dhgroup": "ffdhe2048" 00:14:19.118 } 00:14:19.118 } 00:14:19.118 ]' 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.118 14:51:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.376 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:19.376 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.309 14:51:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:20.567 14:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:20.567 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.133 00:14:21.133 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.133 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.133 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.391 { 00:14:21.391 "cntlid": 17, 00:14:21.391 "qid": 0, 00:14:21.391 "state": "enabled", 00:14:21.391 "thread": "nvmf_tgt_poll_group_000", 00:14:21.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:21.391 "listen_address": { 00:14:21.391 "trtype": "TCP", 00:14:21.391 "adrfam": "IPv4", 
00:14:21.391 "traddr": "10.0.0.2", 00:14:21.391 "trsvcid": "4420" 00:14:21.391 }, 00:14:21.391 "peer_address": { 00:14:21.391 "trtype": "TCP", 00:14:21.391 "adrfam": "IPv4", 00:14:21.391 "traddr": "10.0.0.1", 00:14:21.391 "trsvcid": "45438" 00:14:21.391 }, 00:14:21.391 "auth": { 00:14:21.391 "state": "completed", 00:14:21.391 "digest": "sha256", 00:14:21.391 "dhgroup": "ffdhe3072" 00:14:21.391 } 00:14:21.391 } 00:14:21.391 ]' 00:14:21.391 14:51:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.391 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.649 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:21.649 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:22.583 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:22.841 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.407 00:14:23.407 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.407 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.407 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.670 { 
00:14:23.670 "cntlid": 19, 00:14:23.670 "qid": 0, 00:14:23.670 "state": "enabled", 00:14:23.670 "thread": "nvmf_tgt_poll_group_000", 00:14:23.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:23.670 "listen_address": { 00:14:23.670 "trtype": "TCP", 00:14:23.670 "adrfam": "IPv4", 00:14:23.670 "traddr": "10.0.0.2", 00:14:23.670 "trsvcid": "4420" 00:14:23.670 }, 00:14:23.670 "peer_address": { 00:14:23.670 "trtype": "TCP", 00:14:23.670 "adrfam": "IPv4", 00:14:23.670 "traddr": "10.0.0.1", 00:14:23.670 "trsvcid": "45456" 00:14:23.670 }, 00:14:23.670 "auth": { 00:14:23.670 "state": "completed", 00:14:23.670 "digest": "sha256", 00:14:23.670 "dhgroup": "ffdhe3072" 00:14:23.670 } 00:14:23.670 } 00:14:23.670 ]' 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.670 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.929 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:23.929 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:24.862 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.120 14:51:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.685 00:14:25.685 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.685 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.685 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.943 14:51:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.943 { 00:14:25.943 "cntlid": 21, 00:14:25.943 "qid": 0, 00:14:25.943 "state": "enabled", 00:14:25.943 "thread": "nvmf_tgt_poll_group_000", 00:14:25.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:25.943 "listen_address": { 00:14:25.943 "trtype": "TCP", 00:14:25.943 "adrfam": "IPv4", 00:14:25.943 "traddr": "10.0.0.2", 00:14:25.943 "trsvcid": "4420" 00:14:25.943 }, 00:14:25.943 "peer_address": { 00:14:25.943 "trtype": "TCP", 00:14:25.943 "adrfam": "IPv4", 00:14:25.943 "traddr": "10.0.0.1", 00:14:25.943 "trsvcid": "45470" 00:14:25.943 }, 00:14:25.943 "auth": { 00:14:25.943 "state": "completed", 00:14:25.943 "digest": "sha256", 00:14:25.943 "dhgroup": "ffdhe3072" 00:14:25.943 } 00:14:25.943 } 00:14:25.943 ]' 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.943 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.201 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:26.201 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.134 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.392 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.650 00:14:27.650 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.650 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.650 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.908 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.908 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.908 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.908 14:51:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.166 { 00:14:28.166 "cntlid": 23, 00:14:28.166 "qid": 0, 00:14:28.166 "state": "enabled", 00:14:28.166 "thread": "nvmf_tgt_poll_group_000", 00:14:28.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:28.166 "listen_address": { 00:14:28.166 "trtype": "TCP", 00:14:28.166 "adrfam": "IPv4", 00:14:28.166 "traddr": "10.0.0.2", 00:14:28.166 "trsvcid": "4420" 00:14:28.166 }, 00:14:28.166 "peer_address": { 00:14:28.166 "trtype": "TCP", 00:14:28.166 "adrfam": "IPv4", 00:14:28.166 "traddr": "10.0.0.1", 00:14:28.166 "trsvcid": "45484" 00:14:28.166 }, 00:14:28.166 "auth": { 00:14:28.166 "state": "completed", 00:14:28.166 "digest": "sha256", 00:14:28.166 "dhgroup": "ffdhe3072" 00:14:28.166 } 00:14:28.166 } 00:14:28.166 ]' 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.166 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.424 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:28.424 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:29.356 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.356 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.356 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.356 14:51:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.356 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:29.356 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:29.356 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.356 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.356 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.614 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.179 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.179 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.437 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.437 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.437 { 00:14:30.437 "cntlid": 25, 00:14:30.437 "qid": 0, 00:14:30.437 "state": "enabled", 00:14:30.437 "thread": "nvmf_tgt_poll_group_000", 00:14:30.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:30.437 "listen_address": { 00:14:30.437 "trtype": "TCP", 00:14:30.437 "adrfam": "IPv4", 00:14:30.437 "traddr": "10.0.0.2", 00:14:30.437 "trsvcid": "4420" 00:14:30.437 }, 00:14:30.437 "peer_address": { 00:14:30.437 "trtype": "TCP", 00:14:30.437 "adrfam": "IPv4", 00:14:30.437 "traddr": "10.0.0.1", 00:14:30.437 "trsvcid": "40186" 00:14:30.437 }, 00:14:30.437 "auth": { 00:14:30.437 "state": "completed", 00:14:30.437 "digest": "sha256", 00:14:30.437 "dhgroup": "ffdhe4096" 00:14:30.437 } 00:14:30.437 } 00:14:30.437 ]' 00:14:30.437 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.437 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.437 14:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.437 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:30.437 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.437 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.437 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.437 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.694 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:30.694 14:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:31.625 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:31.913 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.195 00:14:32.195 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.195 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.195 14:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.454 { 00:14:32.454 "cntlid": 27, 00:14:32.454 "qid": 0, 00:14:32.454 "state": "enabled", 00:14:32.454 "thread": "nvmf_tgt_poll_group_000", 00:14:32.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:32.454 "listen_address": { 00:14:32.454 "trtype": "TCP", 00:14:32.454 "adrfam": "IPv4", 00:14:32.454 "traddr": "10.0.0.2", 00:14:32.454 "trsvcid": "4420" 00:14:32.454 }, 00:14:32.454 "peer_address": { 00:14:32.454 "trtype": "TCP", 00:14:32.454 "adrfam": "IPv4", 00:14:32.454 "traddr": "10.0.0.1", 00:14:32.454 "trsvcid": "40212" 00:14:32.454 }, 00:14:32.454 "auth": { 00:14:32.454 "state": "completed", 00:14:32.454 "digest": "sha256", 00:14:32.454 "dhgroup": "ffdhe4096" 00:14:32.454 } 00:14:32.454 } 00:14:32.454 ]' 00:14:32.454 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.717 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.975 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:32.975 14:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:33.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:33.908 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.166 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.424 00:14:34.681 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:14:34.681 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.681 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.939 { 00:14:34.939 "cntlid": 29, 00:14:34.939 "qid": 0, 00:14:34.939 "state": "enabled", 00:14:34.939 "thread": "nvmf_tgt_poll_group_000", 00:14:34.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:34.939 "listen_address": { 00:14:34.939 "trtype": "TCP", 00:14:34.939 "adrfam": "IPv4", 00:14:34.939 "traddr": "10.0.0.2", 00:14:34.939 "trsvcid": "4420" 00:14:34.939 }, 00:14:34.939 "peer_address": { 00:14:34.939 "trtype": "TCP", 00:14:34.939 "adrfam": "IPv4", 00:14:34.939 "traddr": "10.0.0.1", 00:14:34.939 "trsvcid": "40242" 00:14:34.939 }, 00:14:34.939 "auth": { 00:14:34.939 "state": "completed", 00:14:34.939 "digest": "sha256", 00:14:34.939 "dhgroup": "ffdhe4096" 00:14:34.939 } 00:14:34.939 } 00:14:34.939 ]' 00:14:34.939 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.940 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.198 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:35.198 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: 
--dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.130 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.388 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.953 00:14:36.953 14:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.953 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.953 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.211 { 00:14:37.211 "cntlid": 31, 00:14:37.211 "qid": 0, 00:14:37.211 "state": "enabled", 00:14:37.211 "thread": "nvmf_tgt_poll_group_000", 00:14:37.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:37.211 "listen_address": { 00:14:37.211 "trtype": "TCP", 00:14:37.211 "adrfam": "IPv4", 00:14:37.211 "traddr": "10.0.0.2", 00:14:37.211 "trsvcid": "4420" 00:14:37.211 }, 00:14:37.211 "peer_address": { 00:14:37.211 "trtype": "TCP", 00:14:37.211 "adrfam": "IPv4", 00:14:37.211 "traddr": "10.0.0.1", 00:14:37.211 "trsvcid": "40262" 00:14:37.211 }, 00:14:37.211 "auth": { 00:14:37.211 "state": "completed", 00:14:37.211 "digest": "sha256", 00:14:37.211 "dhgroup": "ffdhe4096" 00:14:37.211 } 00:14:37.211 } 00:14:37.211 ]' 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.211 14:51:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.470 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:37.470 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.403 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.661 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.226 00:14:39.227 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.227 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.227 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.498 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.498 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.498 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.498 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.498 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.498 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.499 { 00:14:39.499 "cntlid": 33, 00:14:39.499 "qid": 0, 00:14:39.499 "state": "enabled", 00:14:39.499 "thread": "nvmf_tgt_poll_group_000", 00:14:39.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:39.499 "listen_address": { 00:14:39.499 "trtype": "TCP", 00:14:39.499 "adrfam": "IPv4", 00:14:39.499 "traddr": "10.0.0.2", 00:14:39.499 "trsvcid": "4420" 00:14:39.499 }, 00:14:39.499 "peer_address": { 00:14:39.499 "trtype": "TCP", 00:14:39.499 "adrfam": "IPv4", 00:14:39.499 "traddr": "10.0.0.1", 00:14:39.499 "trsvcid": "40276" 00:14:39.499 }, 00:14:39.499 "auth": { 00:14:39.499 "state": "completed", 00:14:39.499 "digest": "sha256", 00:14:39.499 "dhgroup": "ffdhe6144" 00:14:39.499 } 00:14:39.499 } 00:14:39.499 ]' 00:14:39.499 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.499 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.499 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.757 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.757 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.757 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.757 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.757 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.014 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret 
DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:40.014 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:40.950 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.208 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.774 00:14:41.774 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.774 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.774 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.032 { 00:14:42.032 "cntlid": 35, 00:14:42.032 "qid": 0, 00:14:42.032 "state": "enabled", 00:14:42.032 "thread": "nvmf_tgt_poll_group_000", 00:14:42.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:42.032 "listen_address": { 00:14:42.032 "trtype": "TCP", 00:14:42.032 "adrfam": "IPv4", 00:14:42.032 "traddr": "10.0.0.2", 00:14:42.032 "trsvcid": "4420" 00:14:42.032 }, 00:14:42.032 "peer_address": { 00:14:42.032 "trtype": "TCP", 00:14:42.032 "adrfam": "IPv4", 00:14:42.032 "traddr": "10.0.0.1", 00:14:42.032 "trsvcid": "50416" 00:14:42.032 }, 00:14:42.032 "auth": { 00:14:42.032 "state": "completed", 00:14:42.032 "digest": "sha256", 00:14:42.032 "dhgroup": "ffdhe6144" 00:14:42.032 } 00:14:42.032 } 00:14:42.032 ]' 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.032 14:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.290 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:42.290 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:43.220 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.220 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.220 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.221 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.221 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.221 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.221 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.221 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:43.785 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:43.785 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.785 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.786 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.043 00:14:44.301 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.301 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.301 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.559 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.559 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.559 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.559 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.559 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.559 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.559 { 00:14:44.559 "cntlid": 37, 00:14:44.559 "qid": 0, 00:14:44.559 "state": "enabled", 00:14:44.559 "thread": "nvmf_tgt_poll_group_000", 00:14:44.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:44.559 "listen_address": { 00:14:44.559 "trtype": "TCP", 00:14:44.559 "adrfam": "IPv4", 00:14:44.560 "traddr": "10.0.0.2", 00:14:44.560 "trsvcid": "4420" 00:14:44.560 }, 00:14:44.560 "peer_address": { 00:14:44.560 "trtype": "TCP", 00:14:44.560 "adrfam": "IPv4", 00:14:44.560 "traddr": "10.0.0.1", 00:14:44.560 "trsvcid": "50438" 00:14:44.560 }, 00:14:44.560 "auth": { 00:14:44.560 "state": "completed", 00:14:44.560 "digest": "sha256", 00:14:44.560 "dhgroup": "ffdhe6144" 00:14:44.560 } 00:14:44.560 } 00:14:44.560 ]' 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:44.560 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.818 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:44.818 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.751 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.008 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:46.008 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.008 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.009 14:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.009 14:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.574 00:14:46.574 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.574 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.574 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.832 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.832 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.832 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.832 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.832 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.832 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.832 { 00:14:46.832 "cntlid": 39, 00:14:46.832 "qid": 0, 00:14:46.832 "state": "enabled", 00:14:46.832 "thread": "nvmf_tgt_poll_group_000", 00:14:46.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:46.832 "listen_address": { 00:14:46.832 "trtype": "TCP", 00:14:46.832 "adrfam": "IPv4", 00:14:46.832 "traddr": "10.0.0.2", 00:14:46.832 "trsvcid": "4420" 00:14:46.832 }, 00:14:46.832 "peer_address": { 00:14:46.832 "trtype": "TCP", 00:14:46.833 "adrfam": "IPv4", 00:14:46.833 "traddr": "10.0.0.1", 00:14:46.833 "trsvcid": "50472" 00:14:46.833 }, 00:14:46.833 "auth": { 00:14:46.833 "state": "completed", 00:14:46.833 "digest": "sha256", 00:14:46.833 "dhgroup": "ffdhe6144" 00:14:46.833 } 00:14:46.833 } 00:14:46.833 ]' 00:14:46.833 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.833 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.833 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.833 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.833 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.090 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:47.090 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.090 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.349 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:47.349 14:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:48.282 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.282 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.282 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.283 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.283 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.283 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.283 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.283 14:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
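For orientation, the cycle this trace repeats for every digest/dhgroup/key combination condenses to the steps below. This is a hand-assembled sketch built only from the commands visible in the trace, not the literal source of target/auth.sh; the rpc.py paths are shortened, and key0/ckey0 stand in for the full DHHC-1 secrets shown above.

  rpc='scripts/rpc.py'                              # target-side RPC (default socket), path shortened
  hostrpc='scripts/rpc.py -s /var/tmp/host.sock'    # host-side bdev_nvme RPC, as hostrpc uses above
  subnqn='nqn.2024-03.io.spdk:cnode0'
  hostnqn='nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'

  # 1. Pin the host to a single digest/dhgroup pair for this iteration.
  $hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

  # 2. Authorize the host on the subsystem with the key pair under test
  #    (the key3 iterations omit --dhchap-ctrlr-key; see the note further down).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach through the SPDK initiator, confirm the qpair authenticated, detach.
  $hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'    # expect: completed
  $hostrpc bdev_nvme_detach_controller nvme0

  # 4. Repeat the handshake with the kernel initiator, then tear down.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
      --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"    # key0/ckey0: the DHHC-1 strings
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"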
00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.541 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.473 00:14:49.473 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.473 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.473 14:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.731 { 00:14:49.731 "cntlid": 41, 00:14:49.731 "qid": 0, 00:14:49.731 "state": "enabled", 00:14:49.731 "thread": "nvmf_tgt_poll_group_000", 00:14:49.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:49.731 "listen_address": { 00:14:49.731 "trtype": "TCP", 00:14:49.731 "adrfam": "IPv4", 00:14:49.731 "traddr": "10.0.0.2", 00:14:49.731 "trsvcid": "4420" 00:14:49.731 }, 00:14:49.731 "peer_address": { 00:14:49.731 "trtype": "TCP", 00:14:49.731 "adrfam": "IPv4", 00:14:49.731 "traddr": "10.0.0.1", 00:14:49.731 "trsvcid": "50500" 00:14:49.731 }, 00:14:49.731 "auth": { 00:14:49.731 "state": "completed", 00:14:49.731 "digest": "sha256", 00:14:49.731 "dhgroup": "ffdhe8192" 00:14:49.731 } 00:14:49.731 } 00:14:49.731 ]' 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:49.731 14:51:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.731 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.989 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:49.989 14:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:14:50.921 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:50.922 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.180 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.113 00:14:52.113 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.113 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.113 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.371 { 00:14:52.371 "cntlid": 43, 00:14:52.371 "qid": 0, 00:14:52.371 "state": "enabled", 00:14:52.371 "thread": "nvmf_tgt_poll_group_000", 00:14:52.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:52.371 "listen_address": { 00:14:52.371 "trtype": "TCP", 00:14:52.371 "adrfam": "IPv4", 00:14:52.371 "traddr": "10.0.0.2", 00:14:52.371 "trsvcid": "4420" 00:14:52.371 }, 00:14:52.371 "peer_address": { 00:14:52.371 "trtype": "TCP", 00:14:52.371 "adrfam": "IPv4", 00:14:52.371 "traddr": "10.0.0.1", 00:14:52.371 "trsvcid": "49172" 00:14:52.371 }, 00:14:52.371 "auth": { 00:14:52.371 "state": "completed", 00:14:52.371 "digest": "sha256", 00:14:52.371 "dhgroup": "ffdhe8192" 00:14:52.371 } 00:14:52.371 } 00:14:52.371 ]' 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:52.371 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.628 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.628 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.628 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.885 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:52.885 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:14:53.816 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:53.817 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:54.074 14:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.074 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.006 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.006 { 00:14:55.006 "cntlid": 45, 00:14:55.006 "qid": 0, 00:14:55.006 "state": "enabled", 00:14:55.006 "thread": "nvmf_tgt_poll_group_000", 00:14:55.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:55.006 "listen_address": { 00:14:55.006 "trtype": "TCP", 00:14:55.006 "adrfam": "IPv4", 00:14:55.006 "traddr": "10.0.0.2", 00:14:55.006 "trsvcid": "4420" 00:14:55.006 }, 00:14:55.006 "peer_address": { 00:14:55.006 "trtype": "TCP", 00:14:55.006 "adrfam": "IPv4", 00:14:55.006 "traddr": "10.0.0.1", 00:14:55.006 "trsvcid": "49202" 00:14:55.006 }, 00:14:55.006 "auth": { 00:14:55.006 "state": "completed", 00:14:55.006 "digest": "sha256", 00:14:55.006 "dhgroup": "ffdhe8192" 00:14:55.006 } 00:14:55.006 } 00:14:55.006 ]' 00:14:55.006 
14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.006 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.264 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:55.264 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.264 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.264 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.264 14:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.521 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:55.521 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:14:56.454 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.454 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.454 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.454 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.454 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.454 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.455 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.455 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.712 14:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.712 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.646 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.646 { 00:14:57.646 "cntlid": 47, 00:14:57.646 "qid": 0, 00:14:57.646 "state": "enabled", 00:14:57.646 "thread": "nvmf_tgt_poll_group_000", 00:14:57.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:57.646 "listen_address": { 00:14:57.646 "trtype": "TCP", 00:14:57.646 "adrfam": "IPv4", 00:14:57.646 "traddr": "10.0.0.2", 00:14:57.646 "trsvcid": "4420" 00:14:57.646 }, 00:14:57.646 "peer_address": { 00:14:57.646 "trtype": "TCP", 00:14:57.646 "adrfam": "IPv4", 00:14:57.646 "traddr": "10.0.0.1", 00:14:57.646 "trsvcid": "49240" 00:14:57.646 }, 00:14:57.646 "auth": { 00:14:57.646 "state": "completed", 00:14:57.646 
"digest": "sha256", 00:14:57.646 "dhgroup": "ffdhe8192" 00:14:57.646 } 00:14:57.646 } 00:14:57.646 ]' 00:14:57.646 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.904 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.161 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:58.161 14:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.093 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:59.351 14:51:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.351 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.916 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.916 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.916 { 00:14:59.916 "cntlid": 49, 00:14:59.916 "qid": 0, 00:14:59.916 "state": "enabled", 00:14:59.916 "thread": "nvmf_tgt_poll_group_000", 00:14:59.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:59.916 "listen_address": { 00:14:59.916 "trtype": "TCP", 00:14:59.916 "adrfam": "IPv4", 
00:14:59.916 "traddr": "10.0.0.2", 00:14:59.916 "trsvcid": "4420" 00:14:59.916 }, 00:14:59.916 "peer_address": { 00:14:59.916 "trtype": "TCP", 00:14:59.916 "adrfam": "IPv4", 00:14:59.916 "traddr": "10.0.0.1", 00:14:59.916 "trsvcid": "55358" 00:14:59.916 }, 00:14:59.916 "auth": { 00:14:59.916 "state": "completed", 00:14:59.916 "digest": "sha384", 00:14:59.916 "dhgroup": "null" 00:14:59.916 } 00:14:59.916 } 00:14:59.917 ]' 00:14:59.917 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.174 14:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.432 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:00.432 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.365 14:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.623 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.230 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.230 { 00:15:02.230 "cntlid": 51, 00:15:02.230 "qid": 0, 00:15:02.230 "state": "enabled", 
00:15:02.230 "thread": "nvmf_tgt_poll_group_000", 00:15:02.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:02.230 "listen_address": { 00:15:02.230 "trtype": "TCP", 00:15:02.230 "adrfam": "IPv4", 00:15:02.230 "traddr": "10.0.0.2", 00:15:02.230 "trsvcid": "4420" 00:15:02.230 }, 00:15:02.230 "peer_address": { 00:15:02.230 "trtype": "TCP", 00:15:02.230 "adrfam": "IPv4", 00:15:02.230 "traddr": "10.0.0.1", 00:15:02.230 "trsvcid": "55398" 00:15:02.230 }, 00:15:02.230 "auth": { 00:15:02.230 "state": "completed", 00:15:02.230 "digest": "sha384", 00:15:02.230 "dhgroup": "null" 00:15:02.230 } 00:15:02.230 } 00:15:02.230 ]' 00:15:02.230 14:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.538 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.795 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:02.795 14:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:15:03.728 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.986 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.244 00:15:04.244 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.244 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.244 14:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.501 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.501 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.501 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.502 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.502 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.502 14:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.502 { 00:15:04.502 "cntlid": 53, 00:15:04.502 "qid": 0, 00:15:04.502 "state": "enabled", 00:15:04.502 "thread": "nvmf_tgt_poll_group_000", 00:15:04.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:04.502 "listen_address": { 00:15:04.502 "trtype": "TCP", 00:15:04.502 "adrfam": "IPv4", 00:15:04.502 "traddr": "10.0.0.2", 00:15:04.502 "trsvcid": "4420" 00:15:04.502 }, 00:15:04.502 "peer_address": { 00:15:04.502 "trtype": "TCP", 00:15:04.502 "adrfam": "IPv4", 00:15:04.502 "traddr": "10.0.0.1", 00:15:04.502 "trsvcid": "55434" 00:15:04.502 }, 00:15:04.502 "auth": { 00:15:04.502 "state": "completed", 00:15:04.502 "digest": "sha384", 00:15:04.502 "dhgroup": "null" 00:15:04.502 } 00:15:04.502 } 00:15:04.502 ]' 00:15:04.502 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.502 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.502 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.759 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:04.759 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.759 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.759 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.759 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.017 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:05.017 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:05.948 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.206 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:06.463 00:15:06.463 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.463 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.463 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.721 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.721 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.721 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.721 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.979 { 00:15:06.979 "cntlid": 55, 00:15:06.979 "qid": 0, 00:15:06.979 "state": "enabled", 00:15:06.979 "thread": "nvmf_tgt_poll_group_000", 00:15:06.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:06.979 "listen_address": { 00:15:06.979 "trtype": "TCP", 00:15:06.979 "adrfam": "IPv4", 00:15:06.979 "traddr": "10.0.0.2", 00:15:06.979 "trsvcid": "4420" 00:15:06.979 }, 00:15:06.979 "peer_address": { 00:15:06.979 "trtype": "TCP", 00:15:06.979 "adrfam": "IPv4", 00:15:06.979 "traddr": "10.0.0.1", 00:15:06.979 "trsvcid": "55468" 00:15:06.979 }, 00:15:06.979 "auth": { 00:15:06.979 "state": "completed", 00:15:06.979 "digest": "sha384", 00:15:06.979 "dhgroup": "null" 00:15:06.979 } 00:15:06.979 } 00:15:06.979 ]' 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.979 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.237 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:07.237 14:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.171 14:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.171 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.429 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.687 00:15:08.687 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.687 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.687 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.945 { 00:15:08.945 "cntlid": 57, 00:15:08.945 "qid": 0, 00:15:08.945 "state": "enabled", 00:15:08.945 "thread": "nvmf_tgt_poll_group_000", 00:15:08.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:08.945 "listen_address": { 00:15:08.945 "trtype": "TCP", 00:15:08.945 "adrfam": "IPv4", 00:15:08.945 "traddr": "10.0.0.2", 00:15:08.945 "trsvcid": "4420" 00:15:08.945 }, 00:15:08.945 "peer_address": { 00:15:08.945 "trtype": "TCP", 00:15:08.945 "adrfam": "IPv4", 00:15:08.945 "traddr": "10.0.0.1", 00:15:08.945 "trsvcid": "55492" 00:15:08.945 }, 00:15:08.945 "auth": { 00:15:08.945 "state": "completed", 00:15:08.945 "digest": "sha384", 00:15:08.945 "dhgroup": "ffdhe2048" 00:15:08.945 } 00:15:08.945 } 00:15:08.945 ]' 00:15:08.945 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.203 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.460 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:09.460 14:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.393 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.651 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.909 00:15:10.909 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.909 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:10.909 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.474 { 00:15:11.474 "cntlid": 59, 00:15:11.474 "qid": 0, 00:15:11.474 "state": "enabled", 00:15:11.474 "thread": "nvmf_tgt_poll_group_000", 00:15:11.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:11.474 "listen_address": { 00:15:11.474 "trtype": "TCP", 00:15:11.474 "adrfam": "IPv4", 00:15:11.474 "traddr": "10.0.0.2", 00:15:11.474 "trsvcid": "4420" 00:15:11.474 }, 00:15:11.474 "peer_address": { 00:15:11.474 "trtype": "TCP", 00:15:11.474 "adrfam": "IPv4", 00:15:11.474 "traddr": "10.0.0.1", 00:15:11.474 "trsvcid": "59100" 00:15:11.474 }, 00:15:11.474 "auth": { 00:15:11.474 "state": "completed", 00:15:11.474 "digest": "sha384", 00:15:11.474 "dhgroup": "ffdhe2048" 00:15:11.474 } 00:15:11.474 } 00:15:11.474 ]' 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.474 14:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.474 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.474 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.474 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.474 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.474 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.732 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:11.732 14:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.663 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.921 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.178 00:15:13.178 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.178 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:13.178 14:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.437 { 00:15:13.437 "cntlid": 61, 00:15:13.437 "qid": 0, 00:15:13.437 "state": "enabled", 00:15:13.437 "thread": "nvmf_tgt_poll_group_000", 00:15:13.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:13.437 "listen_address": { 00:15:13.437 "trtype": "TCP", 00:15:13.437 "adrfam": "IPv4", 00:15:13.437 "traddr": "10.0.0.2", 00:15:13.437 "trsvcid": "4420" 00:15:13.437 }, 00:15:13.437 "peer_address": { 00:15:13.437 "trtype": "TCP", 00:15:13.437 "adrfam": "IPv4", 00:15:13.437 "traddr": "10.0.0.1", 00:15:13.437 "trsvcid": "59122" 00:15:13.437 }, 00:15:13.437 "auth": { 00:15:13.437 "state": "completed", 00:15:13.437 "digest": "sha384", 00:15:13.437 "dhgroup": "ffdhe2048" 00:15:13.437 } 00:15:13.437 } 00:15:13.437 ]' 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.437 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.694 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.694 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.694 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.694 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.694 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.951 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:13.951 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:14.883 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.140 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:15.140 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.140 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.141 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.398 00:15:15.398 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.398 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.398 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.656 { 00:15:15.656 "cntlid": 63, 00:15:15.656 "qid": 0, 00:15:15.656 "state": "enabled", 00:15:15.656 "thread": "nvmf_tgt_poll_group_000", 00:15:15.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:15.656 "listen_address": { 00:15:15.656 "trtype": "TCP", 00:15:15.656 "adrfam": "IPv4", 00:15:15.656 "traddr": "10.0.0.2", 00:15:15.656 "trsvcid": "4420" 00:15:15.656 }, 00:15:15.656 "peer_address": { 00:15:15.656 "trtype": "TCP", 00:15:15.656 "adrfam": "IPv4", 00:15:15.656 "traddr": "10.0.0.1", 00:15:15.656 "trsvcid": "59156" 00:15:15.656 }, 00:15:15.656 "auth": { 00:15:15.656 "state": "completed", 00:15:15.656 "digest": "sha384", 00:15:15.656 "dhgroup": "ffdhe2048" 00:15:15.656 } 00:15:15.656 } 00:15:15.656 ]' 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.656 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.914 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:15.914 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.914 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.914 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.914 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.174 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:16.174 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:17.106 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:17.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.107 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.365 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.623 
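[editor's note] The RPC sequence exercised above distills to the short cycle below. This is a minimal sketch, not the literal target/auth.sh code: it assumes an SPDK target already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420 and answering on its default RPC socket, a host application whose RPC socket is /var/tmp/host.sock, and DH-HMAC-CHAP keys named key0/ckey0 registered on both sides earlier in the script; $SPDK_DIR is a hypothetical stand-in for the checkout path.

  #!/usr/bin/env bash
  rpc="$SPDK_DIR/scripts/rpc.py"                     # $SPDK_DIR is an assumption
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side RPC, as in the trace
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # 1. Pin the host to the digest/dhgroup pair under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2. Authorize the host on the target with a key (plus a controller key
  #    when bidirectional authentication is being tested).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a controller from the host, authenticating with the same keys.
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Confirm on the target side that the qpair negotiated what was pinned.
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # 5. Tear down before the next key/dhgroup combination.
  hostrpc bdev_nvme_detach_controller nvme0

Each iteration in the trace follows this cycle and then repeats the check through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:..., then nvme disconnect) before nvmf_subsystem_remove_host clears the host entry for the next keyid.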
00:15:17.623 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.623 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.623 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.881 { 00:15:17.881 "cntlid": 65, 00:15:17.881 "qid": 0, 00:15:17.881 "state": "enabled", 00:15:17.881 "thread": "nvmf_tgt_poll_group_000", 00:15:17.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:17.881 "listen_address": { 00:15:17.881 "trtype": "TCP", 00:15:17.881 "adrfam": "IPv4", 00:15:17.881 "traddr": "10.0.0.2", 00:15:17.881 "trsvcid": "4420" 00:15:17.881 }, 00:15:17.881 "peer_address": { 00:15:17.881 "trtype": "TCP", 00:15:17.881 "adrfam": "IPv4", 00:15:17.881 "traddr": "10.0.0.1", 00:15:17.881 "trsvcid": "59190" 00:15:17.881 }, 00:15:17.881 "auth": { 00:15:17.881 "state": "completed", 00:15:17.881 "digest": "sha384", 00:15:17.881 "dhgroup": "ffdhe3072" 00:15:17.881 } 00:15:17.881 } 00:15:17.881 ]' 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.881 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.138 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:18.138 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.138 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.138 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.138 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.395 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:18.395 14:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:19.327 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.327 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.327 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.327 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.327 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.328 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.328 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.328 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:19.585 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:19.585 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.585 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.585 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.586 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.843 00:15:19.843 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.843 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.843 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.101 { 00:15:20.101 "cntlid": 67, 00:15:20.101 "qid": 0, 00:15:20.101 "state": "enabled", 00:15:20.101 "thread": "nvmf_tgt_poll_group_000", 00:15:20.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:20.101 "listen_address": { 00:15:20.101 "trtype": "TCP", 00:15:20.101 "adrfam": "IPv4", 00:15:20.101 "traddr": "10.0.0.2", 00:15:20.101 "trsvcid": "4420" 00:15:20.101 }, 00:15:20.101 "peer_address": { 00:15:20.101 "trtype": "TCP", 00:15:20.101 "adrfam": "IPv4", 00:15:20.101 "traddr": "10.0.0.1", 00:15:20.101 "trsvcid": "37716" 00:15:20.101 }, 00:15:20.101 "auth": { 00:15:20.101 "state": "completed", 00:15:20.101 "digest": "sha384", 00:15:20.101 "dhgroup": "ffdhe3072" 00:15:20.101 } 00:15:20.101 } 00:15:20.101 ]' 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.101 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.358 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.359 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.359 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.616 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret 
DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:20.616 14:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.551 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.809 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.809 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.809 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.809 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.067 00:15:22.067 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.067 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.067 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.325 { 00:15:22.325 "cntlid": 69, 00:15:22.325 "qid": 0, 00:15:22.325 "state": "enabled", 00:15:22.325 "thread": "nvmf_tgt_poll_group_000", 00:15:22.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:22.325 "listen_address": { 00:15:22.325 "trtype": "TCP", 00:15:22.325 "adrfam": "IPv4", 00:15:22.325 "traddr": "10.0.0.2", 00:15:22.325 "trsvcid": "4420" 00:15:22.325 }, 00:15:22.325 "peer_address": { 00:15:22.325 "trtype": "TCP", 00:15:22.325 "adrfam": "IPv4", 00:15:22.325 "traddr": "10.0.0.1", 00:15:22.325 "trsvcid": "37742" 00:15:22.325 }, 00:15:22.325 "auth": { 00:15:22.325 "state": "completed", 00:15:22.325 "digest": "sha384", 00:15:22.325 "dhgroup": "ffdhe3072" 00:15:22.325 } 00:15:22.325 } 00:15:22.325 ]' 00:15:22.325 14:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.325 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:22.582 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:22.582 14:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.514 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.772 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.030 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.030 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
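[Editor's note] The sequence above is one cell of the test's digest/dhgroup/key matrix: register the host on the subsystem with a key pair, attach a controller through the SPDK host app, inspect the authenticated qpair, then tear everything down. A minimal standalone sketch of that round trip, assuming the defaults visible in this log: target RPC on its default socket, host app on /var/tmp/host.sock, target listening on 10.0.0.2:4420, and DH-HMAC-CHAP keys named key1/ckey1 already loaded into the keyring earlier in the script (that setup precedes this excerpt):

#!/usr/bin/env bash
# One connect_authenticate round trip, condensed from the log above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0

# Host side: pin the initiator to a single digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Target side (default RPC socket): allow the host with this key pair.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller, authenticating with the same keys.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify: the controller came up and the qpair finished authentication
# with the expected digest and dhgroup.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# Tear down before the next key/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The log repeats exactly this shape for every key (key0 through key3) under each dhgroup; only the --dhchap-* arguments change between iterations, and key3, which has no controller key, drops --dhchap-ctrlr-key entirely, as seen above. [End note]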
00:15:24.030 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.030 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:24.288 00:15:24.288 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.288 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.288 14:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.546 { 00:15:24.546 "cntlid": 71, 00:15:24.546 "qid": 0, 00:15:24.546 "state": "enabled", 00:15:24.546 "thread": "nvmf_tgt_poll_group_000", 00:15:24.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:24.546 "listen_address": { 00:15:24.546 "trtype": "TCP", 00:15:24.546 "adrfam": "IPv4", 00:15:24.546 "traddr": "10.0.0.2", 00:15:24.546 "trsvcid": "4420" 00:15:24.546 }, 00:15:24.546 "peer_address": { 00:15:24.546 "trtype": "TCP", 00:15:24.546 "adrfam": "IPv4", 00:15:24.546 "traddr": "10.0.0.1", 00:15:24.546 "trsvcid": "37758" 00:15:24.546 }, 00:15:24.546 "auth": { 00:15:24.546 "state": "completed", 00:15:24.546 "digest": "sha384", 00:15:24.546 "dhgroup": "ffdhe3072" 00:15:24.546 } 00:15:24.546 } 00:15:24.546 ]' 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.546 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.111 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:25.111 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:25.677 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.934 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
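[Editor's note] Besides the SPDK host app, every iteration probes the same credentials with the kernel initiator: the nvme connect / nvme disconnect lines above come from the test's nvme_connect helper wrapping nvme-cli. A minimal sketch of that check, assuming an nvme-cli build with DH-HMAC-CHAP support; the DHHC-1 secrets below are the throwaway key1/ckey1 test blobs printed verbatim in this log, not usable production keys:

#!/usr/bin/env bash
# Kernel-initiator check of one key pair, as done by the nvme_connect
# helper. The DHHC-1 strings are the disposable test secrets copied from
# the log above.
secret='DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf:'
ctrl_secret='DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==:'

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"

# On success the target sees one connected controller; detach it so the
# next matrix cell starts clean.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The "disconnected 1 controller(s)" line in the log is the pass signal for this step; a failed DH-HMAC-CHAP handshake would leave no controller to disconnect. [End note]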
00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.192 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.450 00:15:26.450 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.450 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.450 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.707 { 00:15:26.707 "cntlid": 73, 00:15:26.707 "qid": 0, 00:15:26.707 "state": "enabled", 00:15:26.707 "thread": "nvmf_tgt_poll_group_000", 00:15:26.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:26.707 "listen_address": { 00:15:26.707 "trtype": "TCP", 00:15:26.707 "adrfam": "IPv4", 00:15:26.707 "traddr": "10.0.0.2", 00:15:26.707 "trsvcid": "4420" 00:15:26.707 }, 00:15:26.707 "peer_address": { 00:15:26.707 "trtype": "TCP", 00:15:26.707 "adrfam": "IPv4", 00:15:26.707 "traddr": "10.0.0.1", 00:15:26.707 "trsvcid": "37782" 00:15:26.707 }, 00:15:26.707 "auth": { 00:15:26.707 "state": "completed", 00:15:26.707 "digest": "sha384", 00:15:26.707 "dhgroup": "ffdhe4096" 00:15:26.707 } 00:15:26.707 } 00:15:26.707 ]' 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.707 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.966 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.966 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.966 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.966 
14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.966 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.224 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:27.224 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.157 14:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.415 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.673 00:15:28.673 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.673 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.673 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.931 { 00:15:28.931 "cntlid": 75, 00:15:28.931 "qid": 0, 00:15:28.931 "state": "enabled", 00:15:28.931 "thread": "nvmf_tgt_poll_group_000", 00:15:28.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:28.931 "listen_address": { 00:15:28.931 "trtype": "TCP", 00:15:28.931 "adrfam": "IPv4", 00:15:28.931 "traddr": "10.0.0.2", 00:15:28.931 "trsvcid": "4420" 00:15:28.931 }, 00:15:28.931 "peer_address": { 00:15:28.931 "trtype": "TCP", 00:15:28.931 "adrfam": "IPv4", 00:15:28.931 "traddr": "10.0.0.1", 00:15:28.931 "trsvcid": "37820" 00:15:28.931 }, 00:15:28.931 "auth": { 00:15:28.931 "state": "completed", 00:15:28.931 "digest": "sha384", 00:15:28.931 "dhgroup": "ffdhe4096" 00:15:28.931 } 00:15:28.931 } 00:15:28.931 ]' 00:15:28.931 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.189 14:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.446 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:29.446 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.378 14:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.636 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.637 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.637 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.637 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.637 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.637 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.925 00:15:30.925 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.925 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.925 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.223 { 00:15:31.223 "cntlid": 77, 00:15:31.223 "qid": 0, 00:15:31.223 "state": "enabled", 00:15:31.223 "thread": "nvmf_tgt_poll_group_000", 00:15:31.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:31.223 "listen_address": { 00:15:31.223 "trtype": "TCP", 00:15:31.223 "adrfam": "IPv4", 00:15:31.223 "traddr": "10.0.0.2", 00:15:31.223 "trsvcid": "4420" 00:15:31.223 }, 00:15:31.223 "peer_address": { 00:15:31.223 "trtype": "TCP", 00:15:31.223 "adrfam": "IPv4", 00:15:31.223 "traddr": "10.0.0.1", 00:15:31.223 "trsvcid": "38900" 00:15:31.223 }, 00:15:31.223 "auth": { 00:15:31.223 "state": "completed", 00:15:31.223 "digest": "sha384", 00:15:31.223 "dhgroup": "ffdhe4096" 00:15:31.223 } 00:15:31.223 } 00:15:31.223 ]' 00:15:31.223 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.224 14:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.224 14:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.481 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.481 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.481 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.481 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.481 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.738 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:31.738 14:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.669 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.925 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:33.183 00:15:33.183 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.183 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.183 14:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.440 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.698 { 00:15:33.698 "cntlid": 79, 00:15:33.698 "qid": 0, 00:15:33.698 "state": "enabled", 00:15:33.698 "thread": "nvmf_tgt_poll_group_000", 00:15:33.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:33.698 "listen_address": { 00:15:33.698 "trtype": "TCP", 00:15:33.698 "adrfam": "IPv4", 00:15:33.698 "traddr": "10.0.0.2", 00:15:33.698 "trsvcid": "4420" 00:15:33.698 }, 00:15:33.698 "peer_address": { 00:15:33.698 "trtype": "TCP", 00:15:33.698 "adrfam": "IPv4", 00:15:33.698 "traddr": "10.0.0.1", 00:15:33.698 "trsvcid": "38930" 00:15:33.698 }, 00:15:33.698 "auth": { 00:15:33.698 "state": "completed", 00:15:33.698 "digest": "sha384", 00:15:33.698 "dhgroup": "ffdhe4096" 00:15:33.698 } 00:15:33.698 } 00:15:33.698 ]' 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.698 14:52:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.698 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.955 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:33.955 14:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.888 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.145 14:52:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.145 14:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.710 00:15:35.710 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.710 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.710 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.968 { 00:15:35.968 "cntlid": 81, 00:15:35.968 "qid": 0, 00:15:35.968 "state": "enabled", 00:15:35.968 "thread": "nvmf_tgt_poll_group_000", 00:15:35.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:35.968 "listen_address": { 00:15:35.968 "trtype": "TCP", 00:15:35.968 "adrfam": "IPv4", 00:15:35.968 "traddr": "10.0.0.2", 00:15:35.968 "trsvcid": "4420" 00:15:35.968 }, 00:15:35.968 "peer_address": { 00:15:35.968 "trtype": "TCP", 00:15:35.968 "adrfam": "IPv4", 00:15:35.968 "traddr": "10.0.0.1", 00:15:35.968 "trsvcid": "38964" 00:15:35.968 }, 00:15:35.968 "auth": { 00:15:35.968 "state": "completed", 00:15:35.968 "digest": 
"sha384", 00:15:35.968 "dhgroup": "ffdhe6144" 00:15:35.968 } 00:15:35.968 } 00:15:35.968 ]' 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.968 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.226 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:36.483 14:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.417 14:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.417 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.675 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.675 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.675 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.675 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.241 00:15:38.241 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.241 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.241 14:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.499 { 00:15:38.499 "cntlid": 83, 00:15:38.499 "qid": 0, 00:15:38.499 "state": "enabled", 00:15:38.499 "thread": "nvmf_tgt_poll_group_000", 00:15:38.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:38.499 "listen_address": { 00:15:38.499 "trtype": "TCP", 00:15:38.499 "adrfam": "IPv4", 00:15:38.499 "traddr": "10.0.0.2", 00:15:38.499 
"trsvcid": "4420" 00:15:38.499 }, 00:15:38.499 "peer_address": { 00:15:38.499 "trtype": "TCP", 00:15:38.499 "adrfam": "IPv4", 00:15:38.499 "traddr": "10.0.0.1", 00:15:38.499 "trsvcid": "38990" 00:15:38.499 }, 00:15:38.499 "auth": { 00:15:38.499 "state": "completed", 00:15:38.499 "digest": "sha384", 00:15:38.499 "dhgroup": "ffdhe6144" 00:15:38.499 } 00:15:38.499 } 00:15:38.499 ]' 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.499 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.757 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:38.757 14:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.689 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.947 
14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.947 14:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:40.512 00:15:40.512 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.512 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.512 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.770 { 00:15:40.770 "cntlid": 85, 00:15:40.770 "qid": 0, 00:15:40.770 "state": "enabled", 00:15:40.770 "thread": "nvmf_tgt_poll_group_000", 00:15:40.770 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:40.770 "listen_address": { 00:15:40.770 "trtype": "TCP", 00:15:40.770 "adrfam": "IPv4", 00:15:40.770 "traddr": "10.0.0.2", 00:15:40.770 "trsvcid": "4420" 00:15:40.770 }, 00:15:40.770 "peer_address": { 00:15:40.770 "trtype": "TCP", 00:15:40.770 "adrfam": "IPv4", 00:15:40.770 "traddr": "10.0.0.1", 00:15:40.770 "trsvcid": "35016" 00:15:40.770 }, 00:15:40.770 "auth": { 00:15:40.770 "state": "completed", 00:15:40.770 "digest": "sha384", 00:15:40.770 "dhgroup": "ffdhe6144" 00:15:40.770 } 00:15:40.770 } 00:15:40.770 ]' 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.770 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.028 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.028 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.028 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.285 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:41.286 14:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.219 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.219 14:52:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:42.476 14:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:43.041 00:15:43.041 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.041 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.041 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.299 { 00:15:43.299 "cntlid": 87, 
00:15:43.299 "qid": 0, 00:15:43.299 "state": "enabled", 00:15:43.299 "thread": "nvmf_tgt_poll_group_000", 00:15:43.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:43.299 "listen_address": { 00:15:43.299 "trtype": "TCP", 00:15:43.299 "adrfam": "IPv4", 00:15:43.299 "traddr": "10.0.0.2", 00:15:43.299 "trsvcid": "4420" 00:15:43.299 }, 00:15:43.299 "peer_address": { 00:15:43.299 "trtype": "TCP", 00:15:43.299 "adrfam": "IPv4", 00:15:43.299 "traddr": "10.0.0.1", 00:15:43.299 "trsvcid": "35040" 00:15:43.299 }, 00:15:43.299 "auth": { 00:15:43.299 "state": "completed", 00:15:43.299 "digest": "sha384", 00:15:43.299 "dhgroup": "ffdhe6144" 00:15:43.299 } 00:15:43.299 } 00:15:43.299 ]' 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.299 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.299 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.299 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.299 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.557 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:43.557 14:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.489 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.747 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.680 00:15:45.680 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.680 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.680 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.937 { 00:15:45.937 "cntlid": 89, 00:15:45.937 "qid": 0, 00:15:45.937 "state": "enabled", 00:15:45.937 "thread": "nvmf_tgt_poll_group_000", 00:15:45.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:45.937 "listen_address": { 00:15:45.937 "trtype": "TCP", 00:15:45.937 "adrfam": "IPv4", 00:15:45.937 "traddr": "10.0.0.2", 00:15:45.937 "trsvcid": "4420" 00:15:45.937 }, 00:15:45.937 "peer_address": { 00:15:45.937 "trtype": "TCP", 00:15:45.937 "adrfam": "IPv4", 00:15:45.937 "traddr": "10.0.0.1", 00:15:45.937 "trsvcid": "35062" 00:15:45.937 }, 00:15:45.937 "auth": { 00:15:45.937 "state": "completed", 00:15:45.937 "digest": "sha384", 00:15:45.937 "dhgroup": "ffdhe8192" 00:15:45.937 } 00:15:45.937 } 00:15:45.937 ]' 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.937 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.195 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:46.195 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.195 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.195 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.195 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.454 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:46.454 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.387 14:52:29 
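The nvme connect entries above pass the same key material in nvme-cli's DHHC-1 text form. Reading the values in this log, the two-digit field after DHHC-1: tracks the secret's transformation/length class (00 for a plain secret; 01, 02, 03 for SHA-256-, SHA-384- and SHA-512-sized material); treat that mapping as an observation from this run rather than a spec quote. The invocation shape used throughout, with $HOST_SECRET and $CTRL_SECRET standing in for the full DHHC-1 strings printed in the log:

    # nvme-cli connect with DH-HCHAP secrets, as exercised above. The two
    # placeholder variables are illustrative; the log shows the literal
    # DHHC-1:xx:...: values.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"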
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.387 14:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.646 14:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.579 00:15:48.579 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.579 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.579 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.837 { 00:15:48.837 "cntlid": 91, 00:15:48.837 "qid": 0, 00:15:48.837 "state": "enabled", 00:15:48.837 "thread": "nvmf_tgt_poll_group_000", 00:15:48.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:48.837 "listen_address": { 00:15:48.837 "trtype": "TCP", 00:15:48.837 "adrfam": "IPv4", 00:15:48.837 "traddr": "10.0.0.2", 00:15:48.837 "trsvcid": "4420" 00:15:48.837 }, 00:15:48.837 "peer_address": { 00:15:48.837 "trtype": "TCP", 00:15:48.837 "adrfam": "IPv4", 00:15:48.837 "traddr": "10.0.0.1", 00:15:48.837 "trsvcid": "35086" 00:15:48.837 }, 00:15:48.837 "auth": { 00:15:48.837 "state": "completed", 00:15:48.837 "digest": "sha384", 00:15:48.837 "dhgroup": "ffdhe8192" 00:15:48.837 } 00:15:48.837 } 00:15:48.837 ]' 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.837 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.095 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:49.095 14:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.027 14:52:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:50.027 14:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.285 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:51.218 00:15:51.218 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.218 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.218 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.476 14:52:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.476 { 00:15:51.476 "cntlid": 93, 00:15:51.476 "qid": 0, 00:15:51.476 "state": "enabled", 00:15:51.476 "thread": "nvmf_tgt_poll_group_000", 00:15:51.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:51.476 "listen_address": { 00:15:51.476 "trtype": "TCP", 00:15:51.476 "adrfam": "IPv4", 00:15:51.476 "traddr": "10.0.0.2", 00:15:51.476 "trsvcid": "4420" 00:15:51.476 }, 00:15:51.476 "peer_address": { 00:15:51.476 "trtype": "TCP", 00:15:51.476 "adrfam": "IPv4", 00:15:51.476 "traddr": "10.0.0.1", 00:15:51.476 "trsvcid": "41176" 00:15:51.476 }, 00:15:51.476 "auth": { 00:15:51.476 "state": "completed", 00:15:51.476 "digest": "sha384", 00:15:51.476 "dhgroup": "ffdhe8192" 00:15:51.476 } 00:15:51.476 } 00:15:51.476 ]' 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.476 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.732 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.732 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.732 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.989 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:51.989 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.921 14:52:35 
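Each iteration's pass/fail decision is the trio of jq checks seen above: the qpair listing must report exactly the digest and DH group that were forced via bdev_nvme_set_options, and auth.state must read completed, meaning the DH-HCHAP transaction finished on that qpair. Consolidated into a shell sketch, assuming the JSON from nvmf_subsystem_get_qpairs is captured in qpairs as the script does:

    # Assert that the negotiated auth parameters match what was requested.
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]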
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.921 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.178 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:53.178 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.178 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:53.179 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.112 00:15:54.112 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.112 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.112 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.369 { 00:15:54.369 "cntlid": 95, 00:15:54.369 "qid": 0, 00:15:54.369 "state": "enabled", 00:15:54.369 "thread": "nvmf_tgt_poll_group_000", 00:15:54.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:54.369 "listen_address": { 00:15:54.369 "trtype": "TCP", 00:15:54.369 "adrfam": "IPv4", 00:15:54.369 "traddr": "10.0.0.2", 00:15:54.369 "trsvcid": "4420" 00:15:54.369 }, 00:15:54.369 "peer_address": { 00:15:54.369 "trtype": "TCP", 00:15:54.369 "adrfam": "IPv4", 00:15:54.369 "traddr": "10.0.0.1", 00:15:54.369 "trsvcid": "41196" 00:15:54.369 }, 00:15:54.369 "auth": { 00:15:54.369 "state": "completed", 00:15:54.369 "digest": "sha384", 00:15:54.369 "dhgroup": "ffdhe8192" 00:15:54.369 } 00:15:54.369 } 00:15:54.369 ]' 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.369 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.369 14:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.369 14:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.369 14:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.627 14:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:54.627 14:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.560 14:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:55.560 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:15:55.817 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:55.818 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:56.075
00:15:56.075
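From this point the outer digest loop has advanced to sha512 and the dhgroup loop restarts at null. With --dhchap-dhgroups null no FFDHE exchange is performed, so the transaction reduces to plain challenge-response authentication without the DH-derived ephemeral component; the qpair dumps that follow accordingly report "dhgroup": "null". The host-side restriction applied for these iterations:

    # SHA-512 digest, no Diffie-Hellman group (plain challenge-response).
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null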
14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.075 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.075 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:56.640 { 00:15:56.640 "cntlid": 97, 00:15:56.640 "qid": 0, 00:15:56.640 "state": "enabled", 00:15:56.640 "thread": "nvmf_tgt_poll_group_000", 00:15:56.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:56.640 "listen_address": { 00:15:56.640 "trtype": "TCP", 00:15:56.640 "adrfam": "IPv4", 00:15:56.640 "traddr": "10.0.0.2", 00:15:56.640 "trsvcid": "4420" 00:15:56.640 }, 00:15:56.640 "peer_address": { 00:15:56.640 "trtype": "TCP", 00:15:56.640 "adrfam": "IPv4", 00:15:56.640 "traddr": "10.0.0.1", 00:15:56.640 "trsvcid": "41236" 00:15:56.640 }, 00:15:56.640 "auth": { 00:15:56.640 "state": "completed", 00:15:56.640 "digest": "sha512", 00:15:56.640 "dhgroup": "null" 00:15:56.640 } 00:15:56.640 } 00:15:56.640 ]' 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:56.640 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:56.641 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.641 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.641 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.641 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.898 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:56.898 14:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:57.833 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.091 14:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.349 00:15:58.349 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.349 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.349 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.606 { 00:15:58.606 "cntlid": 99, 00:15:58.606 "qid": 0, 00:15:58.606 "state": "enabled", 00:15:58.606 "thread": "nvmf_tgt_poll_group_000", 00:15:58.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:58.606 "listen_address": { 00:15:58.606 "trtype": "TCP", 00:15:58.606 "adrfam": "IPv4", 00:15:58.606 "traddr": "10.0.0.2", 00:15:58.606 "trsvcid": "4420" 00:15:58.606 }, 00:15:58.606 "peer_address": { 00:15:58.606 "trtype": "TCP", 00:15:58.606 "adrfam": "IPv4", 00:15:58.606 "traddr": "10.0.0.1", 00:15:58.606 "trsvcid": "41264" 00:15:58.606 }, 00:15:58.606 "auth": { 00:15:58.606 "state": "completed", 00:15:58.606 "digest": "sha512", 00:15:58.606 "dhgroup": "null" 00:15:58.606 } 00:15:58.606 } 00:15:58.606 ]' 00:15:58.606 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.864 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.122 14:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:15:59.122 14:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.054 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
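Throughout the log, each hostrpc entry is immediately followed by its expansion at target/auth.sh@31, including the attach_controller pair straddling this point: the helper simply forwards its arguments to rpc.py aimed at the host's RPC socket. A hypothetical reconstruction consistent with those expansions; the actual definition in auth.sh may differ, and $rootdir stands for the SPDK checkout directory:

    # Hypothetical helper matching the auth.sh@31 expansions seen in this log.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }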
00:16:00.311 14:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.569 00:16:00.569 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.569 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.569 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.890 { 00:16:00.890 "cntlid": 101, 00:16:00.890 "qid": 0, 00:16:00.890 "state": "enabled", 00:16:00.890 "thread": "nvmf_tgt_poll_group_000", 00:16:00.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:00.890 "listen_address": { 00:16:00.890 "trtype": "TCP", 00:16:00.890 "adrfam": "IPv4", 00:16:00.890 "traddr": "10.0.0.2", 00:16:00.890 "trsvcid": "4420" 00:16:00.890 }, 00:16:00.890 "peer_address": { 00:16:00.890 "trtype": "TCP", 00:16:00.890 "adrfam": "IPv4", 00:16:00.890 "traddr": "10.0.0.1", 00:16:00.890 "trsvcid": "46866" 00:16:00.890 }, 00:16:00.890 "auth": { 00:16:00.890 "state": "completed", 00:16:00.890 "digest": "sha512", 00:16:00.890 "dhgroup": "null" 00:16:00.890 } 00:16:00.890 } 00:16:00.890 ]' 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.890 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.163 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:01.163 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.163 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.163 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.163 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.421 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:01.421 14:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.353 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.610 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.867 00:16:02.867 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.867 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.867 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.125 { 00:16:03.125 "cntlid": 103, 00:16:03.125 "qid": 0, 00:16:03.125 "state": "enabled", 00:16:03.125 "thread": "nvmf_tgt_poll_group_000", 00:16:03.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.125 "listen_address": { 00:16:03.125 "trtype": "TCP", 00:16:03.125 "adrfam": "IPv4", 00:16:03.125 "traddr": "10.0.0.2", 00:16:03.125 "trsvcid": "4420" 00:16:03.125 }, 00:16:03.125 "peer_address": { 00:16:03.125 "trtype": "TCP", 00:16:03.125 "adrfam": "IPv4", 00:16:03.125 "traddr": "10.0.0.1", 00:16:03.125 "trsvcid": "46882" 00:16:03.125 }, 00:16:03.125 "auth": { 00:16:03.125 "state": "completed", 00:16:03.125 "digest": "sha512", 00:16:03.125 "dhgroup": "null" 00:16:03.125 } 00:16:03.125 } 00:16:03.125 ]' 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.125 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.383 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.383 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.383 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.641 14:52:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:03.641 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.574 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
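
With the null DH group finished for all four key slots, the trace moves on to ffdhe2048 (and later ffdhe3072), repeating the identical four-slot cycle for each group. The driving loop, as reconstructed from the target/auth.sh@119-@123 frames above (the dhgroups/keys array names are the script's own; the cycle body is abbreviated to the two calls the trace shows):

    for dhgroup in "${dhgroups[@]}"; do      # null, ffdhe2048, ffdhe3072, ...
        for keyid in "${!keys[@]}"; do       # key slots 0..3
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
                --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
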
00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.832 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.090 00:16:05.090 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.090 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.090 14:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.348 { 00:16:05.348 "cntlid": 105, 00:16:05.348 "qid": 0, 00:16:05.348 "state": "enabled", 00:16:05.348 "thread": "nvmf_tgt_poll_group_000", 00:16:05.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:05.348 "listen_address": { 00:16:05.348 "trtype": "TCP", 00:16:05.348 "adrfam": "IPv4", 00:16:05.348 "traddr": "10.0.0.2", 00:16:05.348 "trsvcid": "4420" 00:16:05.348 }, 00:16:05.348 "peer_address": { 00:16:05.348 "trtype": "TCP", 00:16:05.348 "adrfam": "IPv4", 00:16:05.348 "traddr": "10.0.0.1", 00:16:05.348 "trsvcid": "46904" 00:16:05.348 }, 00:16:05.348 "auth": { 00:16:05.348 "state": "completed", 00:16:05.348 "digest": "sha512", 00:16:05.348 "dhgroup": "ffdhe2048" 00:16:05.348 } 00:16:05.348 } 00:16:05.348 ]' 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.348 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.606 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.606 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.606 14:52:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.865 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:05.865 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.800 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.057 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.058 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.316 00:16:07.316 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.316 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.316 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.573 { 00:16:07.573 "cntlid": 107, 00:16:07.573 "qid": 0, 00:16:07.573 "state": "enabled", 00:16:07.573 "thread": "nvmf_tgt_poll_group_000", 00:16:07.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:07.573 "listen_address": { 00:16:07.573 "trtype": "TCP", 00:16:07.573 "adrfam": "IPv4", 00:16:07.573 "traddr": "10.0.0.2", 00:16:07.573 "trsvcid": "4420" 00:16:07.573 }, 00:16:07.573 "peer_address": { 00:16:07.573 "trtype": "TCP", 00:16:07.573 "adrfam": "IPv4", 00:16:07.573 "traddr": "10.0.0.1", 00:16:07.573 "trsvcid": "46932" 00:16:07.573 }, 00:16:07.573 "auth": { 00:16:07.573 "state": "completed", 00:16:07.573 "digest": "sha512", 00:16:07.573 "dhgroup": "ffdhe2048" 00:16:07.573 } 00:16:07.573 } 00:16:07.573 ]' 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.573 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.831 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.831 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:07.831 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.831 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.831 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.089 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:08.089 14:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:09.022 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
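
Every successful attach is verified by reading the qpair's auth block back from the target and comparing it field by field, which is what the @73-@77 frames around each qpairs dump above are doing. A condensed form of that check, with $digest and $dhgroup standing in for the loop parameters (rpc_cmd and hostrpc are the script's own wrappers around the target and host RPC sockets):

    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
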
00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.280 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.538 00:16:09.538 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.538 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.538 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.796 { 00:16:09.796 "cntlid": 109, 00:16:09.796 "qid": 0, 00:16:09.796 "state": "enabled", 00:16:09.796 "thread": "nvmf_tgt_poll_group_000", 00:16:09.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:09.796 "listen_address": { 00:16:09.796 "trtype": "TCP", 00:16:09.796 "adrfam": "IPv4", 00:16:09.796 "traddr": "10.0.0.2", 00:16:09.796 "trsvcid": "4420" 00:16:09.796 }, 00:16:09.796 "peer_address": { 00:16:09.796 "trtype": "TCP", 00:16:09.796 "adrfam": "IPv4", 00:16:09.796 "traddr": "10.0.0.1", 00:16:09.796 "trsvcid": "41232" 00:16:09.796 }, 00:16:09.796 "auth": { 00:16:09.796 "state": "completed", 00:16:09.796 "digest": "sha512", 00:16:09.796 "dhgroup": "ffdhe2048" 00:16:09.796 } 00:16:09.796 } 00:16:09.796 ]' 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.796 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.055 14:52:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.055 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.055 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.055 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.055 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.313 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:10.313 14:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.246 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.503 14:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:11.503 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.504 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.504 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.504 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:11.504 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.504 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:11.761 00:16:11.761 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.761 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.761 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.019 { 00:16:12.019 "cntlid": 111, 00:16:12.019 "qid": 0, 00:16:12.019 "state": "enabled", 00:16:12.019 "thread": "nvmf_tgt_poll_group_000", 00:16:12.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:12.019 "listen_address": { 00:16:12.019 "trtype": "TCP", 00:16:12.019 "adrfam": "IPv4", 00:16:12.019 "traddr": "10.0.0.2", 00:16:12.019 "trsvcid": "4420" 00:16:12.019 }, 00:16:12.019 "peer_address": { 00:16:12.019 "trtype": "TCP", 00:16:12.019 "adrfam": "IPv4", 00:16:12.019 "traddr": "10.0.0.1", 00:16:12.019 "trsvcid": "41266" 00:16:12.019 }, 00:16:12.019 "auth": { 00:16:12.019 "state": "completed", 00:16:12.019 "digest": "sha512", 00:16:12.019 "dhgroup": "ffdhe2048" 00:16:12.019 } 00:16:12.019 } 00:16:12.019 ]' 00:16:12.019 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.276 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.276 
14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.277 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.277 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.277 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.277 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.277 14:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.533 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:12.533 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.465 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.723 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.981 00:16:13.981 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.981 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.981 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.239 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.239 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.239 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.239 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.239 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.239 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.239 { 00:16:14.239 "cntlid": 113, 00:16:14.239 "qid": 0, 00:16:14.239 "state": "enabled", 00:16:14.239 "thread": "nvmf_tgt_poll_group_000", 00:16:14.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:14.239 "listen_address": { 00:16:14.239 "trtype": "TCP", 00:16:14.239 "adrfam": "IPv4", 00:16:14.239 "traddr": "10.0.0.2", 00:16:14.239 "trsvcid": "4420" 00:16:14.239 }, 00:16:14.239 "peer_address": { 00:16:14.239 "trtype": "TCP", 00:16:14.239 "adrfam": "IPv4", 00:16:14.239 "traddr": "10.0.0.1", 00:16:14.239 "trsvcid": "41294" 00:16:14.239 }, 00:16:14.239 "auth": { 00:16:14.239 "state": "completed", 00:16:14.239 "digest": "sha512", 00:16:14.239 "dhgroup": "ffdhe3072" 00:16:14.239 } 00:16:14.239 } 00:16:14.239 ]' 00:16:14.239 14:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.497 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.755 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:14.755 14:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.688 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.946 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.512 00:16:16.512 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.512 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.512 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.770 { 00:16:16.770 "cntlid": 115, 00:16:16.770 "qid": 0, 00:16:16.770 "state": "enabled", 00:16:16.770 "thread": "nvmf_tgt_poll_group_000", 00:16:16.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:16.770 "listen_address": { 00:16:16.770 "trtype": "TCP", 00:16:16.770 "adrfam": "IPv4", 00:16:16.770 "traddr": "10.0.0.2", 00:16:16.770 "trsvcid": "4420" 00:16:16.770 }, 00:16:16.770 "peer_address": { 00:16:16.770 "trtype": "TCP", 00:16:16.770 "adrfam": "IPv4", 
00:16:16.770 "traddr": "10.0.0.1", 00:16:16.770 "trsvcid": "41326" 00:16:16.770 }, 00:16:16.770 "auth": { 00:16:16.770 "state": "completed", 00:16:16.770 "digest": "sha512", 00:16:16.770 "dhgroup": "ffdhe3072" 00:16:16.770 } 00:16:16.770 } 00:16:16.770 ]' 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.770 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.028 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:17.028 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.962 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.220 14:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.478 00:16:18.736 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.736 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.736 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.993 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.993 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.993 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.993 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.993 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.993 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.993 { 00:16:18.993 "cntlid": 117, 00:16:18.993 "qid": 0, 00:16:18.993 "state": "enabled", 00:16:18.994 "thread": "nvmf_tgt_poll_group_000", 00:16:18.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:18.994 "listen_address": { 00:16:18.994 "trtype": "TCP", 
00:16:18.994 "adrfam": "IPv4", 00:16:18.994 "traddr": "10.0.0.2", 00:16:18.994 "trsvcid": "4420" 00:16:18.994 }, 00:16:18.994 "peer_address": { 00:16:18.994 "trtype": "TCP", 00:16:18.994 "adrfam": "IPv4", 00:16:18.994 "traddr": "10.0.0.1", 00:16:18.994 "trsvcid": "41354" 00:16:18.994 }, 00:16:18.994 "auth": { 00:16:18.994 "state": "completed", 00:16:18.994 "digest": "sha512", 00:16:18.994 "dhgroup": "ffdhe3072" 00:16:18.994 } 00:16:18.994 } 00:16:18.994 ]' 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.994 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.252 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:19.252 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:20.185 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.186 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.443 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.444 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.009 00:16:21.009 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.009 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.009 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.270 { 00:16:21.270 "cntlid": 119, 00:16:21.270 "qid": 0, 00:16:21.270 "state": "enabled", 00:16:21.270 "thread": "nvmf_tgt_poll_group_000", 00:16:21.270 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:21.270 "listen_address": { 00:16:21.270 "trtype": "TCP", 00:16:21.270 "adrfam": "IPv4", 00:16:21.270 "traddr": "10.0.0.2", 00:16:21.270 "trsvcid": "4420" 00:16:21.270 }, 00:16:21.270 "peer_address": { 00:16:21.270 "trtype": "TCP", 00:16:21.270 "adrfam": "IPv4", 00:16:21.270 "traddr": "10.0.0.1", 00:16:21.270 "trsvcid": "55360" 00:16:21.270 }, 00:16:21.270 "auth": { 00:16:21.270 "state": "completed", 00:16:21.270 "digest": "sha512", 00:16:21.270 "dhgroup": "ffdhe3072" 00:16:21.270 } 00:16:21.270 } 00:16:21.270 ]' 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.270 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.528 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:21.528 14:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.467 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.467 14:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.725 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.985 00:16:23.246 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.246 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.246 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.505 14:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.505 { 00:16:23.505 "cntlid": 121, 00:16:23.505 "qid": 0, 00:16:23.505 "state": "enabled", 00:16:23.505 "thread": "nvmf_tgt_poll_group_000", 00:16:23.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:23.505 "listen_address": { 00:16:23.505 "trtype": "TCP", 00:16:23.505 "adrfam": "IPv4", 00:16:23.505 "traddr": "10.0.0.2", 00:16:23.505 "trsvcid": "4420" 00:16:23.505 }, 00:16:23.505 "peer_address": { 00:16:23.505 "trtype": "TCP", 00:16:23.505 "adrfam": "IPv4", 00:16:23.505 "traddr": "10.0.0.1", 00:16:23.505 "trsvcid": "55386" 00:16:23.505 }, 00:16:23.505 "auth": { 00:16:23.505 "state": "completed", 00:16:23.505 "digest": "sha512", 00:16:23.505 "dhgroup": "ffdhe4096" 00:16:23.505 } 00:16:23.505 } 00:16:23.505 ]' 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.505 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.764 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:23.764 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
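[editor's note] Each pass of the ffdhe4096 group above and below repeats the same sequence: restrict the host's allowed digests/dhgroups, register the host NQN on the target with a DH-CHAP key pair, attach a controller through the host RPC socket, verify, then detach and deregister. A condensed sketch of one iteration (hostnqn stands in for the nqn.2014-08.org.nvmexpress:uuid:... value in the log, and key1/ckey1 are the key names the run uses; this illustrates the flow, it is not the auth.sh source):

  hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  # host side: allow only the digest/dhgroup under test
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  # target side: register the host with host key + controller key
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attach, which performs the DH-HMAC-CHAP handshake
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  hostrpc bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Note that the key3 iterations in this run pass only --dhchap-key key3 with no controller key, so those passes exercise unidirectional authentication, while the key0/key1/key2 passes are bidirectional.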
00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.703 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.962 14:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.530 00:16:25.530 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.530 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.530 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.788 { 00:16:25.788 "cntlid": 123, 00:16:25.788 "qid": 0, 00:16:25.788 "state": "enabled", 00:16:25.788 "thread": "nvmf_tgt_poll_group_000", 00:16:25.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:25.788 "listen_address": { 00:16:25.788 "trtype": "TCP", 00:16:25.788 "adrfam": "IPv4", 00:16:25.788 "traddr": "10.0.0.2", 00:16:25.788 "trsvcid": "4420" 00:16:25.788 }, 00:16:25.788 "peer_address": { 00:16:25.788 "trtype": "TCP", 00:16:25.788 "adrfam": "IPv4", 00:16:25.788 "traddr": "10.0.0.1", 00:16:25.788 "trsvcid": "55400" 00:16:25.788 }, 00:16:25.788 "auth": { 00:16:25.788 "state": "completed", 00:16:25.788 "digest": "sha512", 00:16:25.788 "dhgroup": "ffdhe4096" 00:16:25.788 } 00:16:25.788 } 00:16:25.788 ]' 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.788 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.046 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:26.046 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.984 14:53:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.984 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.242 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.500 00:16:27.759 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.759 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.759 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.018 14:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.018 { 00:16:28.018 "cntlid": 125, 00:16:28.018 "qid": 0, 00:16:28.018 "state": "enabled", 00:16:28.018 "thread": "nvmf_tgt_poll_group_000", 00:16:28.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:28.018 "listen_address": { 00:16:28.018 "trtype": "TCP", 00:16:28.018 "adrfam": "IPv4", 00:16:28.018 "traddr": "10.0.0.2", 00:16:28.018 "trsvcid": "4420" 00:16:28.018 }, 00:16:28.018 "peer_address": { 00:16:28.018 "trtype": "TCP", 00:16:28.018 "adrfam": "IPv4", 00:16:28.018 "traddr": "10.0.0.1", 00:16:28.018 "trsvcid": "55422" 00:16:28.018 }, 00:16:28.018 "auth": { 00:16:28.018 "state": "completed", 00:16:28.018 "digest": "sha512", 00:16:28.018 "dhgroup": "ffdhe4096" 00:16:28.018 } 00:16:28.018 } 00:16:28.018 ]' 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.018 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.276 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:28.276 14:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.211 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:29.469 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:30.045 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.045 14:53:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.045 { 00:16:30.045 "cntlid": 127, 00:16:30.045 "qid": 0, 00:16:30.045 "state": "enabled", 00:16:30.045 "thread": "nvmf_tgt_poll_group_000", 00:16:30.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:30.045 "listen_address": { 00:16:30.045 "trtype": "TCP", 00:16:30.045 "adrfam": "IPv4", 00:16:30.045 "traddr": "10.0.0.2", 00:16:30.045 "trsvcid": "4420" 00:16:30.045 }, 00:16:30.045 "peer_address": { 00:16:30.045 "trtype": "TCP", 00:16:30.045 "adrfam": "IPv4", 00:16:30.045 "traddr": "10.0.0.1", 00:16:30.045 "trsvcid": "33092" 00:16:30.045 }, 00:16:30.045 "auth": { 00:16:30.045 "state": "completed", 00:16:30.045 "digest": "sha512", 00:16:30.045 "dhgroup": "ffdhe4096" 00:16:30.045 } 00:16:30.045 } 00:16:30.045 ]' 00:16:30.045 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.348 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.348 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.348 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.348 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.348 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.348 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.349 14:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.634 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:30.634 14:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.570 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.828 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.829 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.829 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.396 00:16:32.396 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.396 14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.396 
14:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.654 { 00:16:32.654 "cntlid": 129, 00:16:32.654 "qid": 0, 00:16:32.654 "state": "enabled", 00:16:32.654 "thread": "nvmf_tgt_poll_group_000", 00:16:32.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.654 "listen_address": { 00:16:32.654 "trtype": "TCP", 00:16:32.654 "adrfam": "IPv4", 00:16:32.654 "traddr": "10.0.0.2", 00:16:32.654 "trsvcid": "4420" 00:16:32.654 }, 00:16:32.654 "peer_address": { 00:16:32.654 "trtype": "TCP", 00:16:32.654 "adrfam": "IPv4", 00:16:32.654 "traddr": "10.0.0.1", 00:16:32.654 "trsvcid": "33116" 00:16:32.654 }, 00:16:32.654 "auth": { 00:16:32.654 "state": "completed", 00:16:32.654 "digest": "sha512", 00:16:32.654 "dhgroup": "ffdhe6144" 00:16:32.654 } 00:16:32.654 } 00:16:32.654 ]' 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.654 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.913 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:32.913 14:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret 
DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.851 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.109 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.110 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.110 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.110 14:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.679 00:16:34.679 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.679 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.679 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.937 { 00:16:34.937 "cntlid": 131, 00:16:34.937 "qid": 0, 00:16:34.937 "state": "enabled", 00:16:34.937 "thread": "nvmf_tgt_poll_group_000", 00:16:34.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:34.937 "listen_address": { 00:16:34.937 "trtype": "TCP", 00:16:34.937 "adrfam": "IPv4", 00:16:34.937 "traddr": "10.0.0.2", 00:16:34.937 "trsvcid": "4420" 00:16:34.937 }, 00:16:34.937 "peer_address": { 00:16:34.937 "trtype": "TCP", 00:16:34.937 "adrfam": "IPv4", 00:16:34.937 "traddr": "10.0.0.1", 00:16:34.937 "trsvcid": "33142" 00:16:34.937 }, 00:16:34.937 "auth": { 00:16:34.937 "state": "completed", 00:16:34.937 "digest": "sha512", 00:16:34.937 "dhgroup": "ffdhe6144" 00:16:34.937 } 00:16:34.937 } 00:16:34.937 ]' 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.937 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.195 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.195 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.195 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.453 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:35.453 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.387 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.645 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.646 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.646 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.646 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.646 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.646 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.646 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.214 00:16:37.214 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.214 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.214 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.472 { 00:16:37.472 "cntlid": 133, 00:16:37.472 "qid": 0, 00:16:37.472 "state": "enabled", 00:16:37.472 "thread": "nvmf_tgt_poll_group_000", 00:16:37.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:37.472 "listen_address": { 00:16:37.472 "trtype": "TCP", 00:16:37.472 "adrfam": "IPv4", 00:16:37.472 "traddr": "10.0.0.2", 00:16:37.472 "trsvcid": "4420" 00:16:37.472 }, 00:16:37.472 "peer_address": { 00:16:37.472 "trtype": "TCP", 00:16:37.472 "adrfam": "IPv4", 00:16:37.472 "traddr": "10.0.0.1", 00:16:37.472 "trsvcid": "33176" 00:16:37.472 }, 00:16:37.472 "auth": { 00:16:37.472 "state": "completed", 00:16:37.472 "digest": "sha512", 00:16:37.472 "dhgroup": "ffdhe6144" 00:16:37.472 } 00:16:37.472 } 00:16:37.472 ]' 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.472 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.730 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret 
DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:37.730 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.664 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:38.922 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.491 00:16:39.491 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.491 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.491 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.749 { 00:16:39.749 "cntlid": 135, 00:16:39.749 "qid": 0, 00:16:39.749 "state": "enabled", 00:16:39.749 "thread": "nvmf_tgt_poll_group_000", 00:16:39.749 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:39.749 "listen_address": { 00:16:39.749 "trtype": "TCP", 00:16:39.749 "adrfam": "IPv4", 00:16:39.749 "traddr": "10.0.0.2", 00:16:39.749 "trsvcid": "4420" 00:16:39.749 }, 00:16:39.749 "peer_address": { 00:16:39.749 "trtype": "TCP", 00:16:39.749 "adrfam": "IPv4", 00:16:39.749 "traddr": "10.0.0.1", 00:16:39.749 "trsvcid": "33208" 00:16:39.749 }, 00:16:39.749 "auth": { 00:16:39.749 "state": "completed", 00:16:39.749 "digest": "sha512", 00:16:39.749 "dhgroup": "ffdhe6144" 00:16:39.749 } 00:16:39.749 } 00:16:39.749 ]' 00:16:39.749 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.750 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.319 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:40.319 14:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.256 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.515 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.450 00:16:42.450 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.450 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.450 14:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.450 { 00:16:42.450 "cntlid": 137, 00:16:42.450 "qid": 0, 00:16:42.450 "state": "enabled", 00:16:42.450 "thread": "nvmf_tgt_poll_group_000", 00:16:42.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:42.450 "listen_address": { 00:16:42.450 "trtype": "TCP", 00:16:42.450 "adrfam": "IPv4", 00:16:42.450 "traddr": "10.0.0.2", 00:16:42.450 "trsvcid": "4420" 00:16:42.450 }, 00:16:42.450 "peer_address": { 00:16:42.450 "trtype": "TCP", 00:16:42.450 "adrfam": "IPv4", 00:16:42.450 "traddr": "10.0.0.1", 00:16:42.450 "trsvcid": "40080" 00:16:42.450 }, 00:16:42.450 "auth": { 00:16:42.450 "state": "completed", 00:16:42.450 "digest": "sha512", 00:16:42.450 "dhgroup": "ffdhe8192" 00:16:42.450 } 00:16:42.450 } 00:16:42.450 ]' 00:16:42.450 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.708 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.966 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:42.966 14:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.902 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.161 14:53:26 
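Each pass of this loop exercises one digest/DH-group/key combination end to end. Two RPC endpoints are involved: hostrpc drives a host-side SPDK app over /var/tmp/host.sock, while rpc_cmd drives the nvmf target over the default /var/tmp/spdk.sock. Condensed into plain rpc.py calls (with $rpc and $hostnqn as shorthand for the full path and host NQN seen in the log, and keys that were registered earlier in the test), one iteration is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: allow exactly one digest and one DH group for this round.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Target side: register the host with its DH-HMAC-CHAP key pair.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach a controller; this is where the handshake runs.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1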
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.161 14:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.097 00:16:45.097 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.097 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.097 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.355 { 00:16:45.355 "cntlid": 139, 00:16:45.355 "qid": 0, 00:16:45.355 "state": "enabled", 00:16:45.355 "thread": "nvmf_tgt_poll_group_000", 00:16:45.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:45.355 "listen_address": { 00:16:45.355 "trtype": "TCP", 00:16:45.355 "adrfam": "IPv4", 00:16:45.355 "traddr": "10.0.0.2", 00:16:45.355 "trsvcid": "4420" 00:16:45.355 }, 00:16:45.355 "peer_address": { 00:16:45.355 "trtype": "TCP", 00:16:45.355 "adrfam": "IPv4", 00:16:45.355 "traddr": "10.0.0.1", 00:16:45.355 "trsvcid": "40102" 00:16:45.355 }, 00:16:45.355 "auth": { 00:16:45.355 "state": "completed", 00:16:45.355 "digest": "sha512", 00:16:45.355 "dhgroup": "ffdhe8192" 00:16:45.355 } 00:16:45.355 } 00:16:45.355 ]' 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.355 14:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.355 14:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.355 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.355 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.613 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:45.613 14:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: --dhchap-ctrl-secret DHHC-1:02:OTc2YWE2ZjM5ZWZmODU5NThiZDMyMmM4MDY1YTZhZTBjMWM4MTZmYTgzZjM2YzdhdNA7MQ==: 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.549 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.807 14:53:29 
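Between RPC rounds the same credentials are also proven against the kernel initiator: nvme-cli connects to the subsystem with the raw DHHC-1 secrets, and a successful fabric connect followed by a clean disconnect confirms the target accepts them outside SPDK's own host stack. A sketch of the repeated pattern, reusing $hostnqn from the sketch above and with the secrets elided:

# --dhchap-secret authenticates the host; --dhchap-ctrl-secret additionally
# makes the controller authenticate itself back (bidirectional DH-HMAC-CHAP).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 \
    --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0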
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.807 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.746 00:16:47.746 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.746 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.746 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.004 { 00:16:48.004 "cntlid": 141, 00:16:48.004 "qid": 0, 00:16:48.004 "state": "enabled", 00:16:48.004 "thread": "nvmf_tgt_poll_group_000", 00:16:48.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:48.004 "listen_address": { 00:16:48.004 "trtype": "TCP", 00:16:48.004 "adrfam": "IPv4", 00:16:48.004 "traddr": "10.0.0.2", 00:16:48.004 "trsvcid": "4420" 00:16:48.004 }, 00:16:48.004 "peer_address": { 00:16:48.004 "trtype": "TCP", 00:16:48.004 "adrfam": "IPv4", 00:16:48.004 "traddr": "10.0.0.1", 00:16:48.004 "trsvcid": "40124" 00:16:48.004 }, 00:16:48.004 "auth": { 00:16:48.004 "state": "completed", 00:16:48.004 "digest": "sha512", 00:16:48.004 "dhgroup": "ffdhe8192" 00:16:48.004 } 00:16:48.004 } 00:16:48.004 ]' 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.004 14:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.004 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.569 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:48.570 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:01:MWFlNjI2OTk2N2NmZjJkMjczNGNkMTQxODU1M2RjOTS+229+: 00:16:49.507 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.507 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.765 14:53:32 
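Success is asserted on the target's side of the wire, not just via the attach return code: nvmf_subsystem_get_qpairs reports the negotiated parameters per queue pair, and the auth.sh@75-77 checks pin digest, DH group, and final auth state. Stripped of the xtrace pattern escaping, the assertions amount to (again reusing $rpc from above):

qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]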
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:49.765 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:50.705 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.705 { 00:16:50.705 "cntlid": 143, 00:16:50.705 "qid": 0, 00:16:50.705 "state": "enabled", 00:16:50.705 "thread": "nvmf_tgt_poll_group_000", 00:16:50.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:50.705 "listen_address": { 00:16:50.705 "trtype": "TCP", 00:16:50.705 "adrfam": "IPv4", 00:16:50.705 "traddr": "10.0.0.2", 00:16:50.705 "trsvcid": "4420" 00:16:50.705 }, 00:16:50.705 "peer_address": { 00:16:50.705 "trtype": "TCP", 00:16:50.705 "adrfam": "IPv4", 00:16:50.705 "traddr": "10.0.0.1", 00:16:50.705 "trsvcid": "38834" 00:16:50.705 }, 00:16:50.705 "auth": { 00:16:50.705 "state": "completed", 00:16:50.705 "digest": "sha512", 00:16:50.705 "dhgroup": "ffdhe8192" 00:16:50.705 } 00:16:50.705 } 00:16:50.705 ]' 00:16:50.705 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.963 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.963 
14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.963 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.963 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.963 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.963 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.963 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.221 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:51.221 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:16:52.159 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.159 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.159 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.159 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.160 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.420 14:53:35 
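After the exhaustive per-combination sweep, auth.sh@129-141 re-runs the handshake with everything enabled at once; the IFS=,/printf pairs just join the test's digest and DH-group arrays into the comma-separated lists the RPC expects. With all options offered, the connection that follows still authenticates, and the qpair check below pins the negotiated result to sha512/ffdhe8192:

$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192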
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.420 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.360 00:16:53.360 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.360 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.360 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.619 { 00:16:53.619 "cntlid": 145, 00:16:53.619 "qid": 0, 00:16:53.619 "state": "enabled", 00:16:53.619 "thread": "nvmf_tgt_poll_group_000", 00:16:53.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:53.619 "listen_address": { 00:16:53.619 "trtype": "TCP", 00:16:53.619 "adrfam": "IPv4", 00:16:53.619 "traddr": "10.0.0.2", 00:16:53.619 "trsvcid": "4420" 00:16:53.619 }, 00:16:53.619 "peer_address": { 00:16:53.619 
"trtype": "TCP", 00:16:53.619 "adrfam": "IPv4", 00:16:53.619 "traddr": "10.0.0.1", 00:16:53.619 "trsvcid": "38846" 00:16:53.619 }, 00:16:53.619 "auth": { 00:16:53.619 "state": "completed", 00:16:53.619 "digest": "sha512", 00:16:53.619 "dhgroup": "ffdhe8192" 00:16:53.619 } 00:16:53.619 } 00:16:53.619 ]' 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.619 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.902 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:53.902 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:ZDhmZTgyMWRkNDFjNTJhNjYwZTI3YmE5MzNhZmI3ZDJjZmYyNmQ5ODg4Yzk1MmNjJ1aojg==: --dhchap-ctrl-secret DHHC-1:03:MGEwYjdkZWNjY2Y1YmJkYWY2ZGMzZGI1MTZmYmU5YTMyZGRhNTdkNmU5ZjNjZTRhMDU5NjRlMjY5YWY3Y2QxZit99ZI=: 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:54.838 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:55.772 request: 00:16:55.772 { 00:16:55.772 "name": "nvme0", 00:16:55.772 "trtype": "tcp", 00:16:55.772 "traddr": "10.0.0.2", 00:16:55.772 "adrfam": "ipv4", 00:16:55.772 "trsvcid": "4420", 00:16:55.772 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:55.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.772 "prchk_reftag": false, 00:16:55.772 "prchk_guard": false, 00:16:55.772 "hdgst": false, 00:16:55.772 "ddgst": false, 00:16:55.772 "dhchap_key": "key2", 00:16:55.772 "allow_unrecognized_csi": false, 00:16:55.772 "method": "bdev_nvme_attach_controller", 00:16:55.772 "req_id": 1 00:16:55.772 } 00:16:55.772 Got JSON-RPC error response 00:16:55.772 response: 00:16:55.772 { 00:16:55.772 "code": -5, 00:16:55.772 "message": "Input/output error" 00:16:55.772 } 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 14:53:38 
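auth.sh@144-146 is the first failure-path check: only key1 is registered for the host on the target, but the initiator attaches with key2. The handshake aborts and the RPC surfaces it as JSON-RPC error -5 (Input/output error); the NOT helper from autotest_common.sh inverts the exit status, so the test passes precisely because the attach failed. A plain-bash paraphrase of what NOT asserts here (not the helper's literal implementation):

# Target knows key1 only; offering key2 must be rejected.
if $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2; then
    echo "ERROR: attach succeeded with the wrong DH-HMAC-CHAP key" >&2
    exit 1
fi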
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:55.772 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:56.711 request: 00:16:56.711 { 00:16:56.711 "name": "nvme0", 00:16:56.711 "trtype": "tcp", 00:16:56.711 "traddr": "10.0.0.2", 00:16:56.711 "adrfam": "ipv4", 00:16:56.711 "trsvcid": "4420", 00:16:56.711 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:56.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:56.711 "prchk_reftag": false, 00:16:56.711 "prchk_guard": false, 00:16:56.711 "hdgst": false, 00:16:56.711 "ddgst": false, 00:16:56.711 "dhchap_key": "key1", 00:16:56.711 "dhchap_ctrlr_key": "ckey2", 00:16:56.711 "allow_unrecognized_csi": false, 00:16:56.711 "method": "bdev_nvme_attach_controller", 00:16:56.711 "req_id": 1 00:16:56.711 } 00:16:56.711 Got JSON-RPC error response 00:16:56.711 response: 00:16:56.711 { 00:16:56.711 "code": -5, 00:16:56.711 "message": "Input/output error" 00:16:56.711 } 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:56.711 14:53:39 
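The second failure case (auth.sh@149-150) keeps the correct host key but mismatches the controller key: the target holds key1/ckey1 while the host supplies key1 with ckey2. Since the controller key drives the reverse, target-to-host leg of bidirectional authentication, the mismatch aborts the handshake with the same -5 error. The third case that follows does the inverse: the host demands controller authentication with ckey1 while no controller key was registered on the target at all, and that too is rejected.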
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.711 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.277 request: 00:16:57.277 { 00:16:57.277 "name": "nvme0", 00:16:57.277 "trtype": "tcp", 00:16:57.277 "traddr": "10.0.0.2", 00:16:57.277 "adrfam": "ipv4", 00:16:57.277 "trsvcid": "4420", 00:16:57.277 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:57.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:57.277 "prchk_reftag": false, 00:16:57.277 "prchk_guard": false, 00:16:57.277 "hdgst": false, 00:16:57.277 "ddgst": false, 00:16:57.277 "dhchap_key": "key1", 00:16:57.277 "dhchap_ctrlr_key": "ckey1", 00:16:57.277 "allow_unrecognized_csi": false, 00:16:57.277 "method": "bdev_nvme_attach_controller", 00:16:57.277 "req_id": 1 00:16:57.277 } 00:16:57.277 Got JSON-RPC error response 00:16:57.277 response: 00:16:57.277 { 00:16:57.277 "code": -5, 00:16:57.278 "message": "Input/output error" 00:16:57.278 } 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 656343 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 656343 ']' 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 656343 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656343 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656343' 00:16:57.537 killing process with pid 656343 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 656343 00:16:57.537 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 656343 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=679025 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 679025 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 679025 ']' 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.796 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 679025 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 679025 ']' 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
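[Annotation] The target restart above follows SPDK's --wait-for-rpc pattern: nvmf_tgt comes up inside the cvl_0_0_ns_spdk namespace with auth tracing enabled (-L nvmf_auth) but parks before subsystem initialization until an RPC completes it, and waitforlisten blocks until the UNIX-domain RPC socket answers. A minimal stand-alone sketch of that pattern, assuming the binary and rpc.py paths used throughout this log; polling for the socket file is a simplification of waitforlisten, which retries a real RPC call:

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!
# wait for the app to create its default RPC socket (simplified waitforlisten)
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# with --wait-for-rpc the framework stays paused until initialization is
# completed explicitly over RPC
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init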
00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.054 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.312 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.312 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:58.312 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:58.312 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.312 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.312 null0 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qp2 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.jtg ]] 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.jtg 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5t5 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.570 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.me4 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.me4 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:58.571 14:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Fc4 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.UwE ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UwE 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LbG 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
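[Annotation] For readability, the DH-HMAC-CHAP setup traced above condenses to the sketch below; every NQN, path, and flag is copied from this log, with the full rpc.py paths abbreviated. The keyring labels key0..key3/ckey0..ckey2 point at the generated /tmp/spdk.key-* secret files, and key3 is the pair granted to the host on cnode0 before the host-side attach authenticates with it:

# target side: register the secret file in the keyring, then allow the host
# on the subsystem, requiring DH-HMAC-CHAP with that key
rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.LbG
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key3
# host side (separate app, RPC socket /var/tmp/host.sock): attach a bdev
# controller, authenticating with the same key
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3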
00:16:58.571 14:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.950 nvme0n1 00:16:59.950 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.950 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.950 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.208 { 00:17:00.208 "cntlid": 1, 00:17:00.208 "qid": 0, 00:17:00.208 "state": "enabled", 00:17:00.208 "thread": "nvmf_tgt_poll_group_000", 00:17:00.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:00.208 "listen_address": { 00:17:00.208 "trtype": "TCP", 00:17:00.208 "adrfam": "IPv4", 00:17:00.208 "traddr": "10.0.0.2", 00:17:00.208 "trsvcid": "4420" 00:17:00.208 }, 00:17:00.208 "peer_address": { 00:17:00.208 "trtype": "TCP", 00:17:00.208 "adrfam": "IPv4", 00:17:00.208 "traddr": "10.0.0.1", 00:17:00.208 "trsvcid": "38896" 00:17:00.208 }, 00:17:00.208 "auth": { 00:17:00.208 "state": "completed", 00:17:00.208 "digest": "sha512", 00:17:00.208 "dhgroup": "ffdhe8192" 00:17:00.208 } 00:17:00.208 } 00:17:00.208 ]' 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.208 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.468 14:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:17:00.468 14:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:01.404 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.971 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.229 request: 00:17:02.229 { 00:17:02.229 "name": "nvme0", 00:17:02.229 "trtype": "tcp", 00:17:02.229 "traddr": "10.0.0.2", 00:17:02.229 "adrfam": "ipv4", 00:17:02.229 "trsvcid": "4420", 00:17:02.229 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.229 "prchk_reftag": false, 00:17:02.229 "prchk_guard": false, 00:17:02.229 "hdgst": false, 00:17:02.229 "ddgst": false, 00:17:02.229 "dhchap_key": "key3", 00:17:02.229 "allow_unrecognized_csi": false, 00:17:02.229 "method": "bdev_nvme_attach_controller", 00:17:02.229 "req_id": 1 00:17:02.229 } 00:17:02.229 Got JSON-RPC error response 00:17:02.229 response: 00:17:02.229 { 00:17:02.229 "code": -5, 00:17:02.229 "message": "Input/output error" 00:17:02.229 } 00:17:02.229 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.229 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.229 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.230 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.230 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:02.230 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:02.230 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:02.230 14:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.488 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:02.747 request: 00:17:02.747 { 00:17:02.747 "name": "nvme0", 00:17:02.747 "trtype": "tcp", 00:17:02.747 "traddr": "10.0.0.2", 00:17:02.747 "adrfam": "ipv4", 00:17:02.747 "trsvcid": "4420", 00:17:02.747 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.747 "prchk_reftag": false, 00:17:02.747 "prchk_guard": false, 00:17:02.747 "hdgst": false, 00:17:02.747 "ddgst": false, 00:17:02.747 "dhchap_key": "key3", 00:17:02.747 "allow_unrecognized_csi": false, 00:17:02.747 "method": "bdev_nvme_attach_controller", 00:17:02.747 "req_id": 1 00:17:02.747 } 00:17:02.747 Got JSON-RPC error response 00:17:02.747 response: 00:17:02.747 { 00:17:02.747 "code": -5, 00:17:02.747 "message": "Input/output error" 00:17:02.747 } 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:02.747 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.005 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:03.264 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.264 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:03.264 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.264 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.264 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.264 14:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.834 request: 00:17:03.834 { 00:17:03.834 "name": "nvme0", 00:17:03.834 "trtype": "tcp", 00:17:03.834 "traddr": "10.0.0.2", 00:17:03.834 "adrfam": "ipv4", 00:17:03.834 "trsvcid": "4420", 00:17:03.834 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.834 "prchk_reftag": false, 00:17:03.834 "prchk_guard": false, 00:17:03.834 "hdgst": false, 00:17:03.834 "ddgst": false, 00:17:03.834 "dhchap_key": "key0", 00:17:03.834 "dhchap_ctrlr_key": "key1", 00:17:03.834 "allow_unrecognized_csi": false, 00:17:03.834 "method": "bdev_nvme_attach_controller", 00:17:03.834 "req_id": 1 00:17:03.834 } 00:17:03.834 Got JSON-RPC error response 00:17:03.834 response: 00:17:03.834 { 00:17:03.834 "code": -5, 00:17:03.834 "message": "Input/output error" 00:17:03.834 } 00:17:03.834 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:03.834 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.834 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.834 14:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.834 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:03.834 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:03.834 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:04.093 nvme0n1 00:17:04.093 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:04.093 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:04.093 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.351 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.351 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.351 14:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:04.610 14:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:05.992 nvme0n1 00:17:05.992 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:05.992 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:05.992 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:06.250 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.508 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.508 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:17:06.508 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: --dhchap-ctrl-secret DHHC-1:03:ZTIzMzZlOGQzNTY3MmZkMDM2ZmI4NDc3ZWUzYTJmYjc0ZTc1YjA1ODZjYWJjODQ5NmE1MWJhODk5YTk0NzM5Mfa5be0=: 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.446 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key1 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:07.704 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:08.271 request: 00:17:08.271 { 00:17:08.271 "name": "nvme0", 00:17:08.271 "trtype": "tcp", 00:17:08.271 "traddr": "10.0.0.2", 00:17:08.271 "adrfam": "ipv4", 00:17:08.271 "trsvcid": "4420", 00:17:08.271 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:08.271 "prchk_reftag": false, 00:17:08.271 "prchk_guard": false, 00:17:08.271 "hdgst": false, 00:17:08.271 "ddgst": false, 00:17:08.271 "dhchap_key": "key1", 00:17:08.271 "allow_unrecognized_csi": false, 00:17:08.271 "method": "bdev_nvme_attach_controller", 00:17:08.271 "req_id": 1 00:17:08.271 } 00:17:08.271 Got JSON-RPC error response 00:17:08.271 response: 00:17:08.271 { 00:17:08.271 "code": -5, 00:17:08.271 "message": "Input/output error" 00:17:08.271 } 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:08.271 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:09.649 nvme0n1 00:17:09.907 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:09.907 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:09.907 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.165 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.165 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.165 14:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:10.424 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:10.682 nvme0n1 00:17:10.682 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:10.682 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:10.682 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.940 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.940 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.940 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: '' 2s 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: ]] 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmFjZjI3MmM1MWEyYTRmNjU4YWExYThlN2Y4ZTg5OTTLxYpf: 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:11.199 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: 2s 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: ]] 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YmIyYWUxYjhlNmQxN2I4ZWEyZTFmNWMxZWQ0OWMxMjRlYjIxODY0MzZhZTJjMzRkY7WfSg==: 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:13.731 14:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:15.639 14:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:15.639 14:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:17.015 nvme0n1 00:17:17.015 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.015 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.015 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.015 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.015 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.015 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:17.581 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:17.581 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:17.581 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:17.840 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:18.407 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:18.407 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:18.407 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.407 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.407 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:18.407 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.407 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:18.665 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:19.232 request: 00:17:19.232 { 00:17:19.232 "name": "nvme0", 00:17:19.232 "dhchap_key": "key1", 00:17:19.232 "dhchap_ctrlr_key": "key3", 00:17:19.232 "method": "bdev_nvme_set_keys", 00:17:19.232 "req_id": 1 00:17:19.232 } 00:17:19.232 Got JSON-RPC error response 00:17:19.232 response: 00:17:19.232 { 00:17:19.232 "code": -13, 00:17:19.232 "message": "Permission denied" 00:17:19.232 } 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:19.232 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.799 14:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:19.799 14:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:20.737 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:20.737 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.737 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:20.996 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.424 nvme0n1 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
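[Annotation] The re-key flow being exercised here pairs nvmf_subsystem_set_keys on the target with bdev_nvme_set_keys on the live host controller; condensed as a hedged sketch from the commands in this log (full rpc.py paths abbreviated). The mismatched pair attempted next is expected to be rejected, which the -13 response that follows confirms:

# target: rotate the key pair the subsystem will accept for this host
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# host: re-authenticate the existing controller with the matching pair
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# a pair the subsystem does not hold (e.g. --dhchap-ctrlr-key key0) fails
# with {"code": -13, "message": "Permission denied"}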
00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:22.424 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:23.386 request: 00:17:23.386 { 00:17:23.386 "name": "nvme0", 00:17:23.386 "dhchap_key": "key2", 00:17:23.386 "dhchap_ctrlr_key": "key0", 00:17:23.386 "method": "bdev_nvme_set_keys", 00:17:23.386 "req_id": 1 00:17:23.386 } 00:17:23.386 Got JSON-RPC error response 00:17:23.386 response: 00:17:23.386 { 00:17:23.386 "code": -13, 00:17:23.386 "message": "Permission denied" 00:17:23.386 } 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:23.386 14:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.386 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:23.386 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 656364 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 656364 ']' 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 656364 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:24.765 14:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 656364 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 656364' 00:17:24.765 killing process with pid 656364 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 656364 00:17:24.765 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 656364 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:25.335 rmmod nvme_tcp 00:17:25.335 rmmod nvme_fabrics 00:17:25.335 rmmod nvme_keyring 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 679025 ']' 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 679025 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 679025 ']' 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 679025 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 679025 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 679025' 00:17:25.335 killing process with pid 679025 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 679025 00:17:25.335 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 679025 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.594 14:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.498 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:27.498 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.qp2 /tmp/spdk.key-sha256.5t5 /tmp/spdk.key-sha384.Fc4 /tmp/spdk.key-sha512.LbG /tmp/spdk.key-sha512.jtg /tmp/spdk.key-sha384.me4 /tmp/spdk.key-sha256.UwE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:27.498 00:17:27.498 real 3m30.877s 00:17:27.498 user 8m15.036s 00:17:27.498 sys 0m27.917s 00:17:27.498 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.498 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.498 ************************************ 00:17:27.498 END TEST nvmf_auth_target 00:17:27.498 ************************************ 00:17:27.757 14:54:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:27.757 14:54:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:27.757 14:54:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:27.757 14:54:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.757 14:54:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 ************************************ 00:17:27.757 START TEST nvmf_bdevio_no_huge 00:17:27.757 ************************************ 00:17:27.757 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:27.757 * Looking for test storage... 
00:17:27.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.758 --rc genhtml_branch_coverage=1 00:17:27.758 --rc genhtml_function_coverage=1 00:17:27.758 --rc genhtml_legend=1 00:17:27.758 --rc geninfo_all_blocks=1 00:17:27.758 --rc geninfo_unexecuted_blocks=1 00:17:27.758 00:17:27.758 ' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.758 --rc genhtml_branch_coverage=1 00:17:27.758 --rc genhtml_function_coverage=1 00:17:27.758 --rc genhtml_legend=1 00:17:27.758 --rc geninfo_all_blocks=1 00:17:27.758 --rc geninfo_unexecuted_blocks=1 00:17:27.758 00:17:27.758 ' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.758 --rc genhtml_branch_coverage=1 00:17:27.758 --rc genhtml_function_coverage=1 00:17:27.758 --rc genhtml_legend=1 00:17:27.758 --rc geninfo_all_blocks=1 00:17:27.758 --rc geninfo_unexecuted_blocks=1 00:17:27.758 00:17:27.758 ' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.758 --rc genhtml_branch_coverage=1 00:17:27.758 --rc genhtml_function_coverage=1 00:17:27.758 --rc genhtml_legend=1 00:17:27.758 --rc geninfo_all_blocks=1 00:17:27.758 --rc geninfo_unexecuted_blocks=1 00:17:27.758 00:17:27.758 ' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.758 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:27.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.759 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:30.292 
14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:30.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:17:30.292 00:17:30.292 --- 10.0.0.2 ping statistics --- 00:17:30.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.292 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:17:30.292 00:17:30.292 --- 10.0.0.1 ping statistics --- 00:17:30.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.292 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.292 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=684913 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 684913 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 684913 ']' 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.293 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.293 [2024-12-11 14:54:12.828874] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:17:30.293 [2024-12-11 14:54:12.828974] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:30.293 [2024-12-11 14:54:12.909554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.293 [2024-12-11 14:54:12.968100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.293 [2024-12-11 14:54:12.968157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.293 [2024-12-11 14:54:12.968171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.293 [2024-12-11 14:54:12.968182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.293 [2024-12-11 14:54:12.968191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
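This is the distinguishing setup of the no-huge variant: the EAL parameter dump above shows the target coming up with --no-huge -s 1024 (1024 MB of ordinary pages instead of hugepages) on core mask 0x78 (cores 3-6), inside the cvl_0_0_ns_spdk namespace created during nvmftestinit. A hedged reconstruction of that launch, with the binary path, namespace, and flags taken from the log and the readiness wait simplified to a socket check:

#!/usr/bin/env bash
# Hedged sketch of the --no-huge nvmf_tgt start recorded above.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# -i 0              shared-memory instance id
# -e 0xFFFF         enable all tracepoint groups
# --no-huge -s 1024 run on 1024 MB of plain memory, no hugepages
# -m 0x78           reactor mask: cores 3, 4, 5, 6
ip netns exec cvl_0_0_ns_spdk \
  "$SPDK_BIN" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!

# waitforlisten in the real script polls the RPC socket; a minimal
# equivalent is to wait for /var/tmp/spdk.sock to appear.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
echo "nvmf_tgt up, pid $nvmfpid"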
00:17:30.293 [2024-12-11 14:54:12.969344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:17:30.293 [2024-12-11 14:54:12.969421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:17:30.293 [2024-12-11 14:54:12.969378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:17:30.293 [2024-12-11 14:54:12.969425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 [2024-12-11 14:54:13.137201] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 Malloc0 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.552 [2024-12-11 14:54:13.175513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:30.552 { 00:17:30.552 "params": { 00:17:30.552 "name": "Nvme$subsystem", 00:17:30.552 "trtype": "$TEST_TRANSPORT", 00:17:30.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:30.552 "adrfam": "ipv4", 00:17:30.552 "trsvcid": "$NVMF_PORT", 00:17:30.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:30.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:30.552 "hdgst": ${hdgst:-false}, 00:17:30.552 "ddgst": ${ddgst:-false} 00:17:30.552 }, 00:17:30.552 "method": "bdev_nvme_attach_controller" 00:17:30.552 } 00:17:30.552 EOF 00:17:30.552 )") 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:30.552 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:30.552 "params": { 00:17:30.552 "name": "Nvme1", 00:17:30.552 "trtype": "tcp", 00:17:30.552 "traddr": "10.0.0.2", 00:17:30.552 "adrfam": "ipv4", 00:17:30.552 "trsvcid": "4420", 00:17:30.552 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.552 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.552 "hdgst": false, 00:17:30.552 "ddgst": false 00:17:30.552 }, 00:17:30.552 "method": "bdev_nvme_attach_controller" 00:17:30.552 }' 00:17:30.552 [2024-12-11 14:54:13.228356] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
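The bdevio binary in this run never touches an on-disk config: gen_nvmf_target_json prints a bdev_nvme_attach_controller configuration (the printf output is visible verbatim in the trace) and the test hands it over as --json /dev/fd/62 via process substitution. A hedged, trimmed-down equivalent of that plumbing; the params object is copied from the trace, while the outer "subsystems"/"bdev" wrapper is assumed from SPDK's JSON config format and may differ in detail:

#!/usr/bin/env bash
# Hedged sketch of feeding bdevio its config over an anonymous fd.
BDEVIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio

gen_json() {
  cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
}

# <(gen_json) expands to a /dev/fd path, which is exactly the
# /dev/fd/62 argument seen in the log line above.
"$BDEVIO" --json <(gen_json) --no-huge -s 1024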
00:17:30.552 [2024-12-11 14:54:13.228425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid685057 ] 00:17:30.552 [2024-12-11 14:54:13.301316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:30.812 [2024-12-11 14:54:13.367267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.812 [2024-12-11 14:54:13.367317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.812 [2024-12-11 14:54:13.367321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.073 I/O targets: 00:17:31.073 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:31.073 00:17:31.073 00:17:31.073 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.073 http://cunit.sourceforge.net/ 00:17:31.073 00:17:31.073 00:17:31.073 Suite: bdevio tests on: Nvme1n1 00:17:31.073 Test: blockdev write read block ...passed 00:17:31.073 Test: blockdev write zeroes read block ...passed 00:17:31.073 Test: blockdev write zeroes read no split ...passed 00:17:31.073 Test: blockdev write zeroes read split ...passed 00:17:31.333 Test: blockdev write zeroes read split partial ...passed 00:17:31.333 Test: blockdev reset ...[2024-12-11 14:54:13.845988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:31.333 [2024-12-11 14:54:13.846102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe442f0 (9): Bad file descriptor 00:17:31.333 [2024-12-11 14:54:13.899911] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:31.333 passed 00:17:31.333 Test: blockdev write read 8 blocks ...passed 00:17:31.333 Test: blockdev write read size > 128k ...passed 00:17:31.333 Test: blockdev write read invalid size ...passed 00:17:31.333 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:31.333 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:31.333 Test: blockdev write read max offset ...passed 00:17:31.333 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:31.333 Test: blockdev writev readv 8 blocks ...passed 00:17:31.333 Test: blockdev writev readv 30 x 1block ...passed 00:17:31.591 Test: blockdev writev readv block ...passed 00:17:31.592 Test: blockdev writev readv size > 128k ...passed 00:17:31.592 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:31.592 Test: blockdev comparev and writev ...[2024-12-11 14:54:14.154738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.154774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.154799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.154815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.155137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.155169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.155192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.155209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.155522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.155554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.155578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.155595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.155901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.155926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.155947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:31.592 [2024-12-11 14:54:14.155962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:31.592 passed 00:17:31.592 Test: blockdev nvme passthru rw ...passed 00:17:31.592 Test: blockdev nvme passthru vendor specific ...[2024-12-11 14:54:14.238826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.592 [2024-12-11 14:54:14.238855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.239000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.592 [2024-12-11 14:54:14.239022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.239162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.592 [2024-12-11 14:54:14.239184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:31.592 [2024-12-11 14:54:14.239317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:31.592 [2024-12-11 14:54:14.239338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:31.592 passed 00:17:31.592 Test: blockdev nvme admin passthru ...passed 00:17:31.592 Test: blockdev copy ...passed 00:17:31.592 00:17:31.592 Run Summary: Type Total Ran Passed Failed Inactive 00:17:31.592 suites 1 1 n/a 0 0 00:17:31.592 tests 23 23 23 0 0 00:17:31.592 asserts 152 152 152 0 n/a 00:17:31.592 00:17:31.592 Elapsed time = 1.238 seconds 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:32.160 rmmod nvme_tcp 00:17:32.160 rmmod nvme_fabrics 00:17:32.160 rmmod nvme_keyring 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 684913 ']' 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 684913 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 684913 ']' 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 684913 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 684913 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 684913' 00:17:32.160 killing process with pid 684913 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 684913 00:17:32.160 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 684913 00:17:32.419 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:32.419 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:32.419 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:32.419 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.420 14:54:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:34.963 00:17:34.963 real 0m6.855s 00:17:34.963 user 0m11.743s 00:17:34.963 sys 0m2.667s 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.963 ************************************ 00:17:34.963 END TEST nvmf_bdevio_no_huge 00:17:34.963 ************************************ 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.963 ************************************ 00:17:34.963 START TEST nvmf_tls 00:17:34.963 ************************************ 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:34.963 * Looking for test storage... 00:17:34.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.963 --rc genhtml_branch_coverage=1 00:17:34.963 --rc genhtml_function_coverage=1 00:17:34.963 --rc genhtml_legend=1 00:17:34.963 --rc geninfo_all_blocks=1 00:17:34.963 --rc geninfo_unexecuted_blocks=1 00:17:34.963 00:17:34.963 ' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.963 --rc genhtml_branch_coverage=1 00:17:34.963 --rc genhtml_function_coverage=1 00:17:34.963 --rc genhtml_legend=1 00:17:34.963 --rc geninfo_all_blocks=1 00:17:34.963 --rc geninfo_unexecuted_blocks=1 00:17:34.963 00:17:34.963 ' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.963 --rc genhtml_branch_coverage=1 00:17:34.963 --rc genhtml_function_coverage=1 00:17:34.963 --rc genhtml_legend=1 00:17:34.963 --rc geninfo_all_blocks=1 00:17:34.963 --rc geninfo_unexecuted_blocks=1 00:17:34.963 00:17:34.963 ' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:34.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.963 --rc genhtml_branch_coverage=1 00:17:34.963 --rc genhtml_function_coverage=1 00:17:34.963 --rc genhtml_legend=1 00:17:34.963 --rc geninfo_all_blocks=1 00:17:34.963 --rc geninfo_unexecuted_blocks=1 00:17:34.963 00:17:34.963 ' 00:17:34.963 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
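Note on the version check traced above: "lt 1.15 2" splits both version strings on ".", "-" and ":" and compares them component by component. A condensed, runnable sketch of that logic (the helper's structure is inferred from the scripts/common.sh xtrace lines above, not copied verbatim):

    # Sketch: compare "1.15" against "2" the way the traced cmp_versions does.
    cmp_versions() {
        local op=$2 v ver1_l ver2_l
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            # unset components evaluate as 0, so "2" compares like "2.0"
            ((ver1[v] > ver2[v])) && { [[ $op == *'>'* ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]   # all components equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"

In the harness this result only selects which lcov branch/function-coverage flags to export, as the LCOV_OPTS/LCOV assignments traced right after it show.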
00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:34.964 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
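The arrays being filled here are a whitelist of NIC PCI device IDs (Intel e810/x722, Mellanox) that gather_supported_nvmf_pci_devs matches against the machine. Once a PCI function is matched, resolving its interface name is just a sysfs glob, which is what the pci_net_devs lookup in the loop below relies on. A standalone equivalent, using the two e810 ports this test bed reports further down:

    # The discovery that follows boils down to this sysfs walk:
    for pci in 0000:0a:00.0 0000:0a:00.1; do      # 0x8086:0x159b, ice driver
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done
    # prints cvl_0_0 and cvl_0_1 on this bed, matching the "Found net
    # devices under ..." lines logged below
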
00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.872 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:36.873 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:36.873 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:36.873 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:36.873 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:36.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:17:36.873 00:17:36.873 --- 10.0.0.2 ping statistics --- 00:17:36.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.873 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:36.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:17:36.873
00:17:36.873 --- 10.0.0.1 ping statistics ---
00:17:36.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:36.873 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=687142
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 687142
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 687142 ']'
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:36.873 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:17:37.132 [2024-12-11 14:54:19.660210] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:17:37.132 [2024-12-11 14:54:19.660290] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.132 [2024-12-11 14:54:19.737479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.132 [2024-12-11 14:54:19.795629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.132 [2024-12-11 14:54:19.795684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.132 [2024-12-11 14:54:19.795699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.132 [2024-12-11 14:54:19.795711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.132 [2024-12-11 14:54:19.795722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.132 [2024-12-11 14:54:19.796344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.132 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.132 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:37.132 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:37.132 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:37.132 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.391 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.391 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:37.391 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:37.649 true 00:17:37.649 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.649 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:37.908 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:37.908 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:37.908 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:38.166 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:38.166 14:54:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.426 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:38.426 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:38.426 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:38.687 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.687 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:38.947 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:38.947 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:38.947 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:38.947 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:39.208 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:39.208 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:39.208 14:54:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:39.466 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:39.466 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:39.725 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:39.725 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:39.725 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:40.294 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:40.294 14:54:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:40.553 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NZYQamyXoG 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.LOM2WWJEbN 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NZYQamyXoG 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.LOM2WWJEbN 00:17:40.554 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:40.812 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:41.071 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NZYQamyXoG 00:17:41.071 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NZYQamyXoG 00:17:41.071 14:54:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:41.637 [2024-12-11 14:54:24.127782] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.637 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.901 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:42.165 [2024-12-11 14:54:24.697313] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:42.165 [2024-12-11 14:54:24.697642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.165 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:42.424 malloc0 00:17:42.424 14:54:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:42.682 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NZYQamyXoG 00:17:42.940 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:43.200 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NZYQamyXoG 00:17:53.191 Initializing NVMe Controllers 00:17:53.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.191 Initialization complete. Launching workers. 00:17:53.191 ======================================================== 00:17:53.191 Latency(us) 00:17:53.191 Device Information : IOPS MiB/s Average min max 00:17:53.191 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8682.25 33.92 7373.56 1078.78 9119.82 00:17:53.191 ======================================================== 00:17:53.191 Total : 8682.25 33.92 7373.56 1078.78 9119.82 00:17:53.191 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NZYQamyXoG 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NZYQamyXoG 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=689161 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 689161 /var/tmp/bdevperf.sock 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 689161 ']' 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:53.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.191 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.191 [2024-12-11 14:54:35.950729] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:17:53.191 [2024-12-11 14:54:35.950827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689161 ] 00:17:53.449 [2024-12-11 14:54:36.018257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.449 [2024-12-11 14:54:36.076494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.449 14:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.449 14:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:53.449 14:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NZYQamyXoG 00:17:53.708 14:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:54.276 [2024-12-11 14:54:36.759962] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:54.276 TLSTESTn1 00:17:54.276 14:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:54.276 Running I/O for 10 seconds... 
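Both keyring_file_add_key calls above load /tmp/tmp.NZYQamyXoG, the interchange-format PSK written earlier by format_interchange_psk. A sketch of what that helper's inline "python -" appears to compute, assuming the TP 8006 PSK interchange layout (the configured key followed by its little-endian CRC32, base64-encoded, hash id 01); the standalone function below is a reconstruction, not the harness source:

    # Hypothetical standalone rewrite of the format_interchange_psk helper
    # traced earlier (same python-over-stdin pattern the harness itself uses).
    format_interchange_psk() {
        local key=$1 digest=$2
        python3 - "$key" "$digest" <<'PYEOF'
    import base64, sys, zlib
    key, digest = sys.argv[1].encode(), int(sys.argv[2])
    crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32, little-endian
    print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
    PYEOF
    }
    format_interchange_psk 00112233445566778899aabbccddeeff 1
    # should reproduce the NVMeTLSkey-1:01:MDAx...JEiQ: key captured above
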
00:17:56.663 3540.00 IOPS, 13.83 MiB/s [2024-12-11T13:54:40.006Z] 3621.50 IOPS, 14.15 MiB/s [2024-12-11T13:54:41.387Z] 3608.67 IOPS, 14.10 MiB/s [2024-12-11T13:54:42.324Z] 3606.50 IOPS, 14.09 MiB/s [2024-12-11T13:54:43.261Z] 3617.20 IOPS, 14.13 MiB/s [2024-12-11T13:54:44.200Z] 3619.50 IOPS, 14.14 MiB/s [2024-12-11T13:54:45.137Z] 3614.43 IOPS, 14.12 MiB/s [2024-12-11T13:54:46.073Z] 3613.75 IOPS, 14.12 MiB/s [2024-12-11T13:54:47.043Z] 3602.11 IOPS, 14.07 MiB/s [2024-12-11T13:54:47.043Z] 3604.30 IOPS, 14.08 MiB/s 00:18:04.270 Latency(us) 00:18:04.270 [2024-12-11T13:54:47.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.270 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:04.270 Verification LBA range: start 0x0 length 0x2000 00:18:04.270 TLSTESTn1 : 10.02 3608.40 14.10 0.00 0.00 35405.95 10485.76 33787.45 00:18:04.270 [2024-12-11T13:54:47.043Z] =================================================================================================================== 00:18:04.270 [2024-12-11T13:54:47.043Z] Total : 3608.40 14.10 0.00 0.00 35405.95 10485.76 33787.45 00:18:04.270 { 00:18:04.270 "results": [ 00:18:04.270 { 00:18:04.270 "job": "TLSTESTn1", 00:18:04.270 "core_mask": "0x4", 00:18:04.270 "workload": "verify", 00:18:04.270 "status": "finished", 00:18:04.270 "verify_range": { 00:18:04.270 "start": 0, 00:18:04.270 "length": 8192 00:18:04.270 }, 00:18:04.270 "queue_depth": 128, 00:18:04.271 "io_size": 4096, 00:18:04.271 "runtime": 10.024113, 00:18:04.271 "iops": 3608.3990673289495, 00:18:04.271 "mibps": 14.095308856753709, 00:18:04.271 "io_failed": 0, 00:18:04.271 "io_timeout": 0, 00:18:04.271 "avg_latency_us": 35405.946517171, 00:18:04.271 "min_latency_us": 10485.76, 00:18:04.271 "max_latency_us": 33787.44888888889 00:18:04.271 } 00:18:04.271 ], 00:18:04.271 "core_count": 1 00:18:04.271 } 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 689161 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 689161 ']' 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 689161 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.271 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689161 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689161' 00:18:04.530 killing process with pid 689161 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 689161 00:18:04.530 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.530 00:18:04.530 Latency(us) 00:18:04.530 [2024-12-11T13:54:47.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.530 [2024-12-11T13:54:47.303Z] 
=================================================================================================================== 00:18:04.530 [2024-12-11T13:54:47.303Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 689161 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LOM2WWJEbN 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LOM2WWJEbN 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LOM2WWJEbN 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.LOM2WWJEbN 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690477 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:04.530 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690477 /var/tmp/bdevperf.sock 00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 690477 ']' 00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:04.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:04.788 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.788 [2024-12-11 14:54:47.342488] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:04.788 [2024-12-11 14:54:47.342590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690477 ] 00:18:04.788 [2024-12-11 14:54:47.410707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.788 [2024-12-11 14:54:47.469124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.046 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.046 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:05.046 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LOM2WWJEbN 00:18:05.304 14:54:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:05.562 [2024-12-11 14:54:48.102600] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.562 [2024-12-11 14:54:48.111356] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:05.562 [2024-12-11 14:54:48.111643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff8f70 (107): Transport endpoint is not connected 00:18:05.562 [2024-12-11 14:54:48.112634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ff8f70 (9): Bad file descriptor 00:18:05.562 [2024-12-11 14:54:48.113636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:05.562 [2024-12-11 14:54:48.113655] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:05.562 [2024-12-11 14:54:48.113670] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:05.562 [2024-12-11 14:54:48.113685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:05.562 request: 00:18:05.562 { 00:18:05.562 "name": "TLSTEST", 00:18:05.562 "trtype": "tcp", 00:18:05.562 "traddr": "10.0.0.2", 00:18:05.562 "adrfam": "ipv4", 00:18:05.562 "trsvcid": "4420", 00:18:05.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:05.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:05.562 "prchk_reftag": false, 00:18:05.562 "prchk_guard": false, 00:18:05.562 "hdgst": false, 00:18:05.562 "ddgst": false, 00:18:05.562 "psk": "key0", 00:18:05.562 "allow_unrecognized_csi": false, 00:18:05.562 "method": "bdev_nvme_attach_controller", 00:18:05.562 "req_id": 1 00:18:05.562 } 00:18:05.562 Got JSON-RPC error response 00:18:05.562 response: 00:18:05.562 { 00:18:05.562 "code": -5, 00:18:05.562 "message": "Input/output error" 00:18:05.562 } 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 690477 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 690477 ']' 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 690477 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690477 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690477' 00:18:05.562 killing process with pid 690477 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 690477 00:18:05.562 Received shutdown signal, test time was about 10.000000 seconds 00:18:05.562 00:18:05.562 Latency(us) 00:18:05.562 [2024-12-11T13:54:48.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.562 [2024-12-11T13:54:48.335Z] =================================================================================================================== 00:18:05.562 [2024-12-11T13:54:48.335Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.562 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 690477 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NZYQamyXoG 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.NZYQamyXoG 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NZYQamyXoG 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NZYQamyXoG 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690620 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690620 /var/tmp/bdevperf.sock 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 690620 ']' 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.821 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.821 [2024-12-11 14:54:48.435557] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:18:05.821 [2024-12-11 14:54:48.435655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690620 ] 00:18:05.821 [2024-12-11 14:54:48.503651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.821 [2024-12-11 14:54:48.560965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.079 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.079 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:06.079 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NZYQamyXoG 00:18:06.337 14:54:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:06.595 [2024-12-11 14:54:49.189553] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.595 [2024-12-11 14:54:49.195131] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:06.595 [2024-12-11 14:54:49.195163] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:06.595 [2024-12-11 14:54:49.195220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:06.595 [2024-12-11 14:54:49.195707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd58f70 (107): Transport endpoint is not connected 00:18:06.595 [2024-12-11 14:54:49.196695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd58f70 (9): Bad file descriptor 00:18:06.595 [2024-12-11 14:54:49.197694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:06.595 [2024-12-11 14:54:49.197716] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:06.595 [2024-12-11 14:54:49.197731] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:06.595 [2024-12-11 14:54:49.197746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
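The "Could not find PSK for identity" errors above show how the target selects a key during the TLS handshake: the lookup identity printed in the errors is a fixed-format prefix followed by the host NQN and the subsystem NQN, so a key registered for host1 is never offered when host2 connects. A sketch of the identity string, assuming the format shown in the log:

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1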
00:18:06.595 request: 00:18:06.595 { 00:18:06.595 "name": "TLSTEST", 00:18:06.595 "trtype": "tcp", 00:18:06.595 "traddr": "10.0.0.2", 00:18:06.595 "adrfam": "ipv4", 00:18:06.595 "trsvcid": "4420", 00:18:06.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.595 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:06.595 "prchk_reftag": false, 00:18:06.595 "prchk_guard": false, 00:18:06.595 "hdgst": false, 00:18:06.595 "ddgst": false, 00:18:06.595 "psk": "key0", 00:18:06.595 "allow_unrecognized_csi": false, 00:18:06.595 "method": "bdev_nvme_attach_controller", 00:18:06.595 "req_id": 1 00:18:06.595 } 00:18:06.595 Got JSON-RPC error response 00:18:06.595 response: 00:18:06.595 { 00:18:06.595 "code": -5, 00:18:06.595 "message": "Input/output error" 00:18:06.595 } 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 690620 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 690620 ']' 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 690620 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690620 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690620' 00:18:06.595 killing process with pid 690620 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 690620 00:18:06.595 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.595 00:18:06.595 Latency(us) 00:18:06.595 [2024-12-11T13:54:49.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.595 [2024-12-11T13:54:49.368Z] =================================================================================================================== 00:18:06.595 [2024-12-11T13:54:49.368Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.595 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 690620 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NZYQamyXoG 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.NZYQamyXoG 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NZYQamyXoG 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NZYQamyXoG 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690713 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690713 /var/tmp/bdevperf.sock 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 690713 ']' 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.853 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.853 [2024-12-11 14:54:49.495093] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:18:06.853 [2024-12-11 14:54:49.495185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690713 ] 00:18:06.853 [2024-12-11 14:54:49.563319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.853 [2024-12-11 14:54:49.620333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.111 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.111 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:07.111 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NZYQamyXoG 00:18:07.369 14:54:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.628 [2024-12-11 14:54:50.245359] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.628 [2024-12-11 14:54:50.256877] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:07.628 [2024-12-11 14:54:50.256935] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:07.628 [2024-12-11 14:54:50.256988] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:07.628 [2024-12-11 14:54:50.257553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfdf70 (107): Transport endpoint is not connected 00:18:07.628 [2024-12-11 14:54:50.258552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cfdf70 (9): Bad file descriptor 00:18:07.628 [2024-12-11 14:54:50.259552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:07.628 [2024-12-11 14:54:50.259572] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:07.628 [2024-12-11 14:54:50.259600] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:07.628 [2024-12-11 14:54:50.259616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:07.628 request: 00:18:07.628 { 00:18:07.628 "name": "TLSTEST", 00:18:07.628 "trtype": "tcp", 00:18:07.628 "traddr": "10.0.0.2", 00:18:07.628 "adrfam": "ipv4", 00:18:07.628 "trsvcid": "4420", 00:18:07.628 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:07.628 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.628 "prchk_reftag": false, 00:18:07.628 "prchk_guard": false, 00:18:07.628 "hdgst": false, 00:18:07.628 "ddgst": false, 00:18:07.628 "psk": "key0", 00:18:07.628 "allow_unrecognized_csi": false, 00:18:07.628 "method": "bdev_nvme_attach_controller", 00:18:07.628 "req_id": 1 00:18:07.628 } 00:18:07.628 Got JSON-RPC error response 00:18:07.628 response: 00:18:07.628 { 00:18:07.628 "code": -5, 00:18:07.628 "message": "Input/output error" 00:18:07.628 } 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 690713 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 690713 ']' 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 690713 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690713 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690713' 00:18:07.628 killing process with pid 690713 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 690713 00:18:07.628 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.628 00:18:07.628 Latency(us) 00:18:07.628 [2024-12-11T13:54:50.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.628 [2024-12-11T13:54:50.401Z] =================================================================================================================== 00:18:07.628 [2024-12-11T13:54:50.401Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.628 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 690713 00:18:07.887 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:07.887 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:07.887 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:07.887 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:07.888 14:54:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=690804 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 690804 /var/tmp/bdevperf.sock 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 690804 ']' 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.888 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.888 [2024-12-11 14:54:50.588808] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:18:07.888 [2024-12-11 14:54:50.588910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690804 ] 00:18:08.147 [2024-12-11 14:54:50.660899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.147 [2024-12-11 14:54:50.719602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.147 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.147 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.147 14:54:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:08.405 [2024-12-11 14:54:51.078958] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:08.405 [2024-12-11 14:54:51.079011] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:08.405 request: 00:18:08.405 { 00:18:08.405 "name": "key0", 00:18:08.405 "path": "", 00:18:08.405 "method": "keyring_file_add_key", 00:18:08.405 "req_id": 1 00:18:08.405 } 00:18:08.405 Got JSON-RPC error response 00:18:08.405 response: 00:18:08.405 { 00:18:08.405 "code": -1, 00:18:08.405 "message": "Operation not permitted" 00:18:08.405 } 00:18:08.405 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.664 [2024-12-11 14:54:51.343776] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.664 [2024-12-11 14:54:51.343848] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:08.664 request: 00:18:08.664 { 00:18:08.664 "name": "TLSTEST", 00:18:08.664 "trtype": "tcp", 00:18:08.664 "traddr": "10.0.0.2", 00:18:08.664 "adrfam": "ipv4", 00:18:08.664 "trsvcid": "4420", 00:18:08.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.664 "prchk_reftag": false, 00:18:08.664 "prchk_guard": false, 00:18:08.664 "hdgst": false, 00:18:08.664 "ddgst": false, 00:18:08.664 "psk": "key0", 00:18:08.664 "allow_unrecognized_csi": false, 00:18:08.664 "method": "bdev_nvme_attach_controller", 00:18:08.664 "req_id": 1 00:18:08.664 } 00:18:08.664 Got JSON-RPC error response 00:18:08.664 response: 00:18:08.664 { 00:18:08.664 "code": -126, 00:18:08.664 "message": "Required key not available" 00:18:08.664 } 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 690804 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 690804 ']' 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 690804 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690804 
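Note the two distinct failures in this case: keyring_file_add_key rejects the empty string outright (the file keyring only accepts absolute paths, hence the -1 "Operation not permitted"), and the subsequent attach then fails with -126 "Required key not available" because key0 was never created. The failing call in isolation (socket path illustrative):

# Fails before any file I/O: '' is not an absolute path
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''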
00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690804' 00:18:08.664 killing process with pid 690804 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 690804 00:18:08.664 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.664 00:18:08.664 Latency(us) 00:18:08.664 [2024-12-11T13:54:51.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.664 [2024-12-11T13:54:51.437Z] =================================================================================================================== 00:18:08.664 [2024-12-11T13:54:51.437Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.664 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 690804 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 687142 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 687142 ']' 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 687142 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687142 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687142' 00:18:08.923 killing process with pid 687142 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 687142 00:18:08.923 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 687142 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.hUxNXkm2GK 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.hUxNXkm2GK 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.181 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=691062 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 691062 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 691062 ']' 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.182 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.440 [2024-12-11 14:54:51.995654] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:09.440 [2024-12-11 14:54:51.995747] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.440 [2024-12-11 14:54:52.067308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.440 [2024-12-11 14:54:52.125787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.440 [2024-12-11 14:54:52.125868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
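The key_long value generated above follows the NVMe TLS PSK interchange format: the configured key bytes are suffixed with their CRC-32 (little-endian), base64-encoded, and wrapped as NVMeTLSkey-1:<hh>:<base64>:, where the two-digit hash field selects the retained-PSK digest (02, i.e. SHA-384, here). A minimal sketch of the computation the format_interchange_psk helper performs, reproducing the output seen in the log:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: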
00:18:09.440 [2024-12-11 14:54:52.125882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.440 [2024-12-11 14:54:52.125892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.440 [2024-12-11 14:54:52.125911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.440 [2024-12-11 14:54:52.126497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.hUxNXkm2GK 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUxNXkm2GK 00:18:09.698 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:09.954 [2024-12-11 14:54:52.521410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.954 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:10.212 14:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.473 [2024-12-11 14:54:53.066936] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.473 [2024-12-11 14:54:53.067206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.474 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.733 malloc0 00:18:10.733 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:10.992 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:11.251 14:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUxNXkm2GK 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUxNXkm2GK 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=691349 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 691349 /var/tmp/bdevperf.sock 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 691349 ']' 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.509 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.509 [2024-12-11 14:54:54.237787] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
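This run is the positive case: before the bdevperf launch traced here, setup_nvmf_tgt configured the TLS-enabled target with the RPC sequence shown above. Condensed into one place (rpc.py invocation shortened, key path illustrative):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/psk.key   # the 0600-mode interchange-format key file
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the host registered against key0, the attach succeeds and TLSTESTn1 sustains roughly 3500 IOPS over the 10-second verify run below; at the 4096-byte I/O size that is 3525.51 * 4096 / 2^20 ≈ 13.77 MiB/s, matching the reported throughput.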
00:18:11.509 [2024-12-11 14:54:54.237895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691349 ] 00:18:11.767 [2024-12-11 14:54:54.318229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.767 [2024-12-11 14:54:54.389556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.767 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.767 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.767 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:12.332 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.589 [2024-12-11 14:54:55.138951] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.589 TLSTESTn1 00:18:12.589 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:12.589 Running I/O for 10 seconds... 00:18:14.911 3511.00 IOPS, 13.71 MiB/s [2024-12-11T13:54:58.623Z] 3561.50 IOPS, 13.91 MiB/s [2024-12-11T13:54:59.560Z] 3532.33 IOPS, 13.80 MiB/s [2024-12-11T13:55:00.500Z] 3546.50 IOPS, 13.85 MiB/s [2024-12-11T13:55:01.439Z] 3519.60 IOPS, 13.75 MiB/s [2024-12-11T13:55:02.378Z] 3523.17 IOPS, 13.76 MiB/s [2024-12-11T13:55:03.759Z] 3510.71 IOPS, 13.71 MiB/s [2024-12-11T13:55:04.699Z] 3515.75 IOPS, 13.73 MiB/s [2024-12-11T13:55:05.657Z] 3519.89 IOPS, 13.75 MiB/s [2024-12-11T13:55:05.657Z] 3520.60 IOPS, 13.75 MiB/s 00:18:22.884 Latency(us) 00:18:22.884 [2024-12-11T13:55:05.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.884 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.884 Verification LBA range: start 0x0 length 0x2000 00:18:22.884 TLSTESTn1 : 10.02 3525.51 13.77 0.00 0.00 36242.24 6213.78 31263.10 00:18:22.884 [2024-12-11T13:55:05.657Z] =================================================================================================================== 00:18:22.884 [2024-12-11T13:55:05.657Z] Total : 3525.51 13.77 0.00 0.00 36242.24 6213.78 31263.10 00:18:22.884 { 00:18:22.884 "results": [ 00:18:22.884 { 00:18:22.884 "job": "TLSTESTn1", 00:18:22.884 "core_mask": "0x4", 00:18:22.884 "workload": "verify", 00:18:22.884 "status": "finished", 00:18:22.884 "verify_range": { 00:18:22.884 "start": 0, 00:18:22.884 "length": 8192 00:18:22.884 }, 00:18:22.884 "queue_depth": 128, 00:18:22.884 "io_size": 4096, 00:18:22.884 "runtime": 10.021531, 00:18:22.884 "iops": 3525.5092260853157, 00:18:22.884 "mibps": 13.771520414395765, 00:18:22.884 "io_failed": 0, 00:18:22.884 "io_timeout": 0, 00:18:22.884 "avg_latency_us": 36242.236459975866, 00:18:22.884 "min_latency_us": 6213.783703703703, 00:18:22.884 "max_latency_us": 31263.09925925926 00:18:22.884 } 00:18:22.884 ], 00:18:22.884 
"core_count": 1 00:18:22.884 } 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 691349 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 691349 ']' 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 691349 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691349 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691349' 00:18:22.884 killing process with pid 691349 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 691349 00:18:22.884 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.884 00:18:22.884 Latency(us) 00:18:22.884 [2024-12-11T13:55:05.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.884 [2024-12-11T13:55:05.657Z] =================================================================================================================== 00:18:22.884 [2024-12-11T13:55:05.657Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.884 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 691349 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.hUxNXkm2GK 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUxNXkm2GK 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUxNXkm2GK 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hUxNXkm2GK 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.144 
14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hUxNXkm2GK 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=692668 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 692668 /var/tmp/bdevperf.sock 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 692668 ']' 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.144 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.144 [2024-12-11 14:55:05.720064] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:18:23.144 [2024-12-11 14:55:05.720142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid692668 ] 00:18:23.145 [2024-12-11 14:55:05.787808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.145 [2024-12-11 14:55:05.844558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.403 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.403 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:23.403 14:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:23.661 [2024-12-11 14:55:06.199423] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hUxNXkm2GK': 0100666 00:18:23.661 [2024-12-11 14:55:06.199460] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:23.661 request: 00:18:23.661 { 00:18:23.661 "name": "key0", 00:18:23.661 "path": "/tmp/tmp.hUxNXkm2GK", 00:18:23.661 "method": "keyring_file_add_key", 00:18:23.661 "req_id": 1 00:18:23.661 } 00:18:23.661 Got JSON-RPC error response 00:18:23.661 response: 00:18:23.661 { 00:18:23.661 "code": -1, 00:18:23.661 "message": "Operation not permitted" 00:18:23.661 } 00:18:23.661 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:23.921 [2024-12-11 14:55:06.524408] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.921 [2024-12-11 14:55:06.524471] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:23.921 request: 00:18:23.921 { 00:18:23.921 "name": "TLSTEST", 00:18:23.921 "trtype": "tcp", 00:18:23.921 "traddr": "10.0.0.2", 00:18:23.921 "adrfam": "ipv4", 00:18:23.921 "trsvcid": "4420", 00:18:23.921 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.921 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.921 "prchk_reftag": false, 00:18:23.921 "prchk_guard": false, 00:18:23.921 "hdgst": false, 00:18:23.922 "ddgst": false, 00:18:23.922 "psk": "key0", 00:18:23.922 "allow_unrecognized_csi": false, 00:18:23.922 "method": "bdev_nvme_attach_controller", 00:18:23.922 "req_id": 1 00:18:23.922 } 00:18:23.922 Got JSON-RPC error response 00:18:23.922 response: 00:18:23.922 { 00:18:23.922 "code": -126, 00:18:23.922 "message": "Required key not available" 00:18:23.922 } 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 692668 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 692668 ']' 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 692668 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 692668 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 692668' 00:18:23.922 killing process with pid 692668 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 692668 00:18:23.922 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.922 00:18:23.922 Latency(us) 00:18:23.922 [2024-12-11T13:55:06.695Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.922 [2024-12-11T13:55:06.695Z] =================================================================================================================== 00:18:23.922 [2024-12-11T13:55:06.695Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:23.922 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 692668 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 691062 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 691062 ']' 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 691062 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 691062 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 691062' 00:18:24.181 killing process with pid 691062 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 691062 00:18:24.181 14:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 691062 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=692827 
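The failure in this last case is purely a permission check: after chmod 0666, keyring_file_add_key rejects the key file (the 0100666 in the error is the file's octal st_mode, i.e. a regular file readable and writable by group and others), so the attach again ends in -126 "Required key not available". The target restarted below repeats the same check via NOT setup_nvmf_tgt with the still-0666 key. Restoring the mode the key was created with makes it loadable again; a sketch:

chmod 0600 /tmp/tmp.hUxNXkm2GK
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK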
00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 692827 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 692827 ']' 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.439 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.439 [2024-12-11 14:55:07.108708] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:24.439 [2024-12-11 14:55:07.108786] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.439 [2024-12-11 14:55:07.180688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.697 [2024-12-11 14:55:07.236235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.697 [2024-12-11 14:55:07.236288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.697 [2024-12-11 14:55:07.236302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.697 [2024-12-11 14:55:07.236313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.697 [2024-12-11 14:55:07.236322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
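[editorial sketch] nvmfappstart boils down to launching nvmf_tgt inside the test's network namespace and blocking in waitforlisten until the RPC socket answers. A rough sketch under stated assumptions (the spdk_get_version probe and the sleep interval are illustrative; the helper's actual polling details may differ):

  # start the target in the namespace, as traced above
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is up and serving RPCs
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done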
00:18:24.697 [2024-12-11 14:55:07.236893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.hUxNXkm2GK 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.hUxNXkm2GK 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.hUxNXkm2GK 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUxNXkm2GK 00:18:24.697 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:24.955 [2024-12-11 14:55:07.623803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.955 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:25.214 14:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.471 [2024-12-11 14:55:08.157261] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.471 [2024-12-11 14:55:08.157512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.471 14:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.729 malloc0 00:18:25.730 14:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.295 14:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:26.554 [2024-12-11 
14:55:09.122409] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hUxNXkm2GK': 0100666 00:18:26.555 [2024-12-11 14:55:09.122459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:26.555 request: 00:18:26.555 { 00:18:26.555 "name": "key0", 00:18:26.555 "path": "/tmp/tmp.hUxNXkm2GK", 00:18:26.555 "method": "keyring_file_add_key", 00:18:26.555 "req_id": 1 00:18:26.555 } 00:18:26.555 Got JSON-RPC error response 00:18:26.555 response: 00:18:26.555 { 00:18:26.555 "code": -1, 00:18:26.555 "message": "Operation not permitted" 00:18:26.555 } 00:18:26.555 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.815 [2024-12-11 14:55:09.395218] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:26.815 [2024-12-11 14:55:09.395301] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:26.815 request: 00:18:26.815 { 00:18:26.815 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.815 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.815 "psk": "key0", 00:18:26.815 "method": "nvmf_subsystem_add_host", 00:18:26.815 "req_id": 1 00:18:26.815 } 00:18:26.815 Got JSON-RPC error response 00:18:26.815 response: 00:18:26.815 { 00:18:26.815 "code": -32603, 00:18:26.815 "message": "Internal error" 00:18:26.815 } 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 692827 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 692827 ']' 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 692827 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 692827 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 692827' 00:18:26.815 killing process with pid 692827 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 692827 00:18:26.815 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 692827 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.hUxNXkm2GK 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=693120 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 693120 00:18:27.074 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 693120 ']' 00:18:27.075 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.075 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.075 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.075 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.075 14:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.075 [2024-12-11 14:55:09.759836] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:27.075 [2024-12-11 14:55:09.759921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.075 [2024-12-11 14:55:09.842325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.333 [2024-12-11 14:55:09.898417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.333 [2024-12-11 14:55:09.898479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.333 [2024-12-11 14:55:09.898507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.333 [2024-12-11 14:55:09.898518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.333 [2024-12-11 14:55:09.898529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
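[editorial sketch] setup_nvmf_tgt (target/tls.sh@186, traced below) performs the whole target-side TLS configuration; condensed from the RPC calls in this run, the sequence is (rpc.py abbreviates the full scripts/rpc.py path used here):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k makes the listener TLS-enabled ("secure_channel": true in the saved config)
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0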
00:18:27.333 [2024-12-11 14:55:09.899195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.hUxNXkm2GK 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUxNXkm2GK 00:18:27.333 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:27.592 [2024-12-11 14:55:10.302390] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.592 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:27.851 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.111 [2024-12-11 14:55:10.847941] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.111 [2024-12-11 14:55:10.848236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.111 14:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:28.371 malloc0 00:18:28.630 14:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.890 14:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:29.149 14:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=693409 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 693409 /var/tmp/bdevperf.sock 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 693409 ']' 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.407 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.407 [2024-12-11 14:55:12.062761] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:29.407 [2024-12-11 14:55:12.062863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693409 ] 00:18:29.407 [2024-12-11 14:55:12.135459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.665 [2024-12-11 14:55:12.195147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.665 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.665 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.665 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:29.923 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.180 [2024-12-11 14:55:12.823859] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.180 TLSTESTn1 00:18:30.180 14:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:30.743 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:30.743 "subsystems": [ 00:18:30.743 { 00:18:30.743 "subsystem": "keyring", 00:18:30.743 "config": [ 00:18:30.743 { 00:18:30.743 "method": "keyring_file_add_key", 00:18:30.743 "params": { 00:18:30.743 "name": "key0", 00:18:30.743 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:30.743 } 00:18:30.743 } 00:18:30.743 ] 00:18:30.743 }, 00:18:30.743 { 00:18:30.743 "subsystem": "iobuf", 00:18:30.743 "config": [ 00:18:30.743 { 00:18:30.743 "method": "iobuf_set_options", 00:18:30.743 "params": { 00:18:30.743 "small_pool_count": 8192, 00:18:30.743 "large_pool_count": 1024, 00:18:30.743 "small_bufsize": 8192, 00:18:30.743 "large_bufsize": 135168, 00:18:30.743 "enable_numa": false 00:18:30.743 } 00:18:30.743 } 00:18:30.743 ] 00:18:30.743 }, 00:18:30.743 { 00:18:30.743 "subsystem": "sock", 00:18:30.743 "config": [ 00:18:30.743 { 00:18:30.743 "method": "sock_set_default_impl", 00:18:30.743 "params": { 00:18:30.743 "impl_name": "posix" 00:18:30.743 } 
00:18:30.743 }, 00:18:30.743 { 00:18:30.743 "method": "sock_impl_set_options", 00:18:30.743 "params": { 00:18:30.743 "impl_name": "ssl", 00:18:30.743 "recv_buf_size": 4096, 00:18:30.743 "send_buf_size": 4096, 00:18:30.744 "enable_recv_pipe": true, 00:18:30.744 "enable_quickack": false, 00:18:30.744 "enable_placement_id": 0, 00:18:30.744 "enable_zerocopy_send_server": true, 00:18:30.744 "enable_zerocopy_send_client": false, 00:18:30.744 "zerocopy_threshold": 0, 00:18:30.744 "tls_version": 0, 00:18:30.744 "enable_ktls": false 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "sock_impl_set_options", 00:18:30.744 "params": { 00:18:30.744 "impl_name": "posix", 00:18:30.744 "recv_buf_size": 2097152, 00:18:30.744 "send_buf_size": 2097152, 00:18:30.744 "enable_recv_pipe": true, 00:18:30.744 "enable_quickack": false, 00:18:30.744 "enable_placement_id": 0, 00:18:30.744 "enable_zerocopy_send_server": true, 00:18:30.744 "enable_zerocopy_send_client": false, 00:18:30.744 "zerocopy_threshold": 0, 00:18:30.744 "tls_version": 0, 00:18:30.744 "enable_ktls": false 00:18:30.744 } 00:18:30.744 } 00:18:30.744 ] 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "subsystem": "vmd", 00:18:30.744 "config": [] 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "subsystem": "accel", 00:18:30.744 "config": [ 00:18:30.744 { 00:18:30.744 "method": "accel_set_options", 00:18:30.744 "params": { 00:18:30.744 "small_cache_size": 128, 00:18:30.744 "large_cache_size": 16, 00:18:30.744 "task_count": 2048, 00:18:30.744 "sequence_count": 2048, 00:18:30.744 "buf_count": 2048 00:18:30.744 } 00:18:30.744 } 00:18:30.744 ] 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "subsystem": "bdev", 00:18:30.744 "config": [ 00:18:30.744 { 00:18:30.744 "method": "bdev_set_options", 00:18:30.744 "params": { 00:18:30.744 "bdev_io_pool_size": 65535, 00:18:30.744 "bdev_io_cache_size": 256, 00:18:30.744 "bdev_auto_examine": true, 00:18:30.744 "iobuf_small_cache_size": 128, 00:18:30.744 "iobuf_large_cache_size": 16 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "bdev_raid_set_options", 00:18:30.744 "params": { 00:18:30.744 "process_window_size_kb": 1024, 00:18:30.744 "process_max_bandwidth_mb_sec": 0 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "bdev_iscsi_set_options", 00:18:30.744 "params": { 00:18:30.744 "timeout_sec": 30 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "bdev_nvme_set_options", 00:18:30.744 "params": { 00:18:30.744 "action_on_timeout": "none", 00:18:30.744 "timeout_us": 0, 00:18:30.744 "timeout_admin_us": 0, 00:18:30.744 "keep_alive_timeout_ms": 10000, 00:18:30.744 "arbitration_burst": 0, 00:18:30.744 "low_priority_weight": 0, 00:18:30.744 "medium_priority_weight": 0, 00:18:30.744 "high_priority_weight": 0, 00:18:30.744 "nvme_adminq_poll_period_us": 10000, 00:18:30.744 "nvme_ioq_poll_period_us": 0, 00:18:30.744 "io_queue_requests": 0, 00:18:30.744 "delay_cmd_submit": true, 00:18:30.744 "transport_retry_count": 4, 00:18:30.744 "bdev_retry_count": 3, 00:18:30.744 "transport_ack_timeout": 0, 00:18:30.744 "ctrlr_loss_timeout_sec": 0, 00:18:30.744 "reconnect_delay_sec": 0, 00:18:30.744 "fast_io_fail_timeout_sec": 0, 00:18:30.744 "disable_auto_failback": false, 00:18:30.744 "generate_uuids": false, 00:18:30.744 "transport_tos": 0, 00:18:30.744 "nvme_error_stat": false, 00:18:30.744 "rdma_srq_size": 0, 00:18:30.744 "io_path_stat": false, 00:18:30.744 "allow_accel_sequence": false, 00:18:30.744 "rdma_max_cq_size": 0, 00:18:30.744 "rdma_cm_event_timeout_ms": 0, 
00:18:30.744 "dhchap_digests": [ 00:18:30.744 "sha256", 00:18:30.744 "sha384", 00:18:30.744 "sha512" 00:18:30.744 ], 00:18:30.744 "dhchap_dhgroups": [ 00:18:30.744 "null", 00:18:30.744 "ffdhe2048", 00:18:30.744 "ffdhe3072", 00:18:30.744 "ffdhe4096", 00:18:30.744 "ffdhe6144", 00:18:30.744 "ffdhe8192" 00:18:30.744 ], 00:18:30.744 "rdma_umr_per_io": false 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "bdev_nvme_set_hotplug", 00:18:30.744 "params": { 00:18:30.744 "period_us": 100000, 00:18:30.744 "enable": false 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "bdev_malloc_create", 00:18:30.744 "params": { 00:18:30.744 "name": "malloc0", 00:18:30.744 "num_blocks": 8192, 00:18:30.744 "block_size": 4096, 00:18:30.744 "physical_block_size": 4096, 00:18:30.744 "uuid": "9059da58-8ed6-4495-84d6-f4e42aa2be19", 00:18:30.744 "optimal_io_boundary": 0, 00:18:30.744 "md_size": 0, 00:18:30.744 "dif_type": 0, 00:18:30.744 "dif_is_head_of_md": false, 00:18:30.744 "dif_pi_format": 0 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "bdev_wait_for_examine" 00:18:30.744 } 00:18:30.744 ] 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "subsystem": "nbd", 00:18:30.744 "config": [] 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "subsystem": "scheduler", 00:18:30.744 "config": [ 00:18:30.744 { 00:18:30.744 "method": "framework_set_scheduler", 00:18:30.744 "params": { 00:18:30.744 "name": "static" 00:18:30.744 } 00:18:30.744 } 00:18:30.744 ] 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "subsystem": "nvmf", 00:18:30.744 "config": [ 00:18:30.744 { 00:18:30.744 "method": "nvmf_set_config", 00:18:30.744 "params": { 00:18:30.744 "discovery_filter": "match_any", 00:18:30.744 "admin_cmd_passthru": { 00:18:30.744 "identify_ctrlr": false 00:18:30.744 }, 00:18:30.744 "dhchap_digests": [ 00:18:30.744 "sha256", 00:18:30.744 "sha384", 00:18:30.744 "sha512" 00:18:30.744 ], 00:18:30.744 "dhchap_dhgroups": [ 00:18:30.744 "null", 00:18:30.744 "ffdhe2048", 00:18:30.744 "ffdhe3072", 00:18:30.744 "ffdhe4096", 00:18:30.744 "ffdhe6144", 00:18:30.744 "ffdhe8192" 00:18:30.744 ] 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_set_max_subsystems", 00:18:30.744 "params": { 00:18:30.744 "max_subsystems": 1024 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_set_crdt", 00:18:30.744 "params": { 00:18:30.744 "crdt1": 0, 00:18:30.744 "crdt2": 0, 00:18:30.744 "crdt3": 0 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_create_transport", 00:18:30.744 "params": { 00:18:30.744 "trtype": "TCP", 00:18:30.744 "max_queue_depth": 128, 00:18:30.744 "max_io_qpairs_per_ctrlr": 127, 00:18:30.744 "in_capsule_data_size": 4096, 00:18:30.744 "max_io_size": 131072, 00:18:30.744 "io_unit_size": 131072, 00:18:30.744 "max_aq_depth": 128, 00:18:30.744 "num_shared_buffers": 511, 00:18:30.744 "buf_cache_size": 4294967295, 00:18:30.744 "dif_insert_or_strip": false, 00:18:30.744 "zcopy": false, 00:18:30.744 "c2h_success": false, 00:18:30.744 "sock_priority": 0, 00:18:30.744 "abort_timeout_sec": 1, 00:18:30.744 "ack_timeout": 0, 00:18:30.744 "data_wr_pool_size": 0 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_create_subsystem", 00:18:30.744 "params": { 00:18:30.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.744 "allow_any_host": false, 00:18:30.744 "serial_number": "SPDK00000000000001", 00:18:30.744 "model_number": "SPDK bdev Controller", 00:18:30.744 "max_namespaces": 10, 00:18:30.744 "min_cntlid": 1, 
00:18:30.744 "max_cntlid": 65519, 00:18:30.744 "ana_reporting": false 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_subsystem_add_host", 00:18:30.744 "params": { 00:18:30.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.744 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.744 "psk": "key0" 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_subsystem_add_ns", 00:18:30.744 "params": { 00:18:30.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.744 "namespace": { 00:18:30.744 "nsid": 1, 00:18:30.744 "bdev_name": "malloc0", 00:18:30.744 "nguid": "9059DA588ED6449584D6F4E42AA2BE19", 00:18:30.744 "uuid": "9059da58-8ed6-4495-84d6-f4e42aa2be19", 00:18:30.744 "no_auto_visible": false 00:18:30.744 } 00:18:30.744 } 00:18:30.744 }, 00:18:30.744 { 00:18:30.744 "method": "nvmf_subsystem_add_listener", 00:18:30.744 "params": { 00:18:30.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.744 "listen_address": { 00:18:30.744 "trtype": "TCP", 00:18:30.744 "adrfam": "IPv4", 00:18:30.744 "traddr": "10.0.0.2", 00:18:30.744 "trsvcid": "4420" 00:18:30.744 }, 00:18:30.744 "secure_channel": true 00:18:30.744 } 00:18:30.744 } 00:18:30.744 ] 00:18:30.744 } 00:18:30.744 ] 00:18:30.744 }' 00:18:30.744 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:31.001 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:31.001 "subsystems": [ 00:18:31.001 { 00:18:31.001 "subsystem": "keyring", 00:18:31.001 "config": [ 00:18:31.001 { 00:18:31.001 "method": "keyring_file_add_key", 00:18:31.001 "params": { 00:18:31.001 "name": "key0", 00:18:31.001 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:31.001 } 00:18:31.002 } 00:18:31.002 ] 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "subsystem": "iobuf", 00:18:31.002 "config": [ 00:18:31.002 { 00:18:31.002 "method": "iobuf_set_options", 00:18:31.002 "params": { 00:18:31.002 "small_pool_count": 8192, 00:18:31.002 "large_pool_count": 1024, 00:18:31.002 "small_bufsize": 8192, 00:18:31.002 "large_bufsize": 135168, 00:18:31.002 "enable_numa": false 00:18:31.002 } 00:18:31.002 } 00:18:31.002 ] 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "subsystem": "sock", 00:18:31.002 "config": [ 00:18:31.002 { 00:18:31.002 "method": "sock_set_default_impl", 00:18:31.002 "params": { 00:18:31.002 "impl_name": "posix" 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "sock_impl_set_options", 00:18:31.002 "params": { 00:18:31.002 "impl_name": "ssl", 00:18:31.002 "recv_buf_size": 4096, 00:18:31.002 "send_buf_size": 4096, 00:18:31.002 "enable_recv_pipe": true, 00:18:31.002 "enable_quickack": false, 00:18:31.002 "enable_placement_id": 0, 00:18:31.002 "enable_zerocopy_send_server": true, 00:18:31.002 "enable_zerocopy_send_client": false, 00:18:31.002 "zerocopy_threshold": 0, 00:18:31.002 "tls_version": 0, 00:18:31.002 "enable_ktls": false 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "sock_impl_set_options", 00:18:31.002 "params": { 00:18:31.002 "impl_name": "posix", 00:18:31.002 "recv_buf_size": 2097152, 00:18:31.002 "send_buf_size": 2097152, 00:18:31.002 "enable_recv_pipe": true, 00:18:31.002 "enable_quickack": false, 00:18:31.002 "enable_placement_id": 0, 00:18:31.002 "enable_zerocopy_send_server": true, 00:18:31.002 "enable_zerocopy_send_client": false, 00:18:31.002 "zerocopy_threshold": 0, 00:18:31.002 "tls_version": 0, 00:18:31.002 "enable_ktls": false 00:18:31.002 } 
00:18:31.002 } 00:18:31.002 ] 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "subsystem": "vmd", 00:18:31.002 "config": [] 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "subsystem": "accel", 00:18:31.002 "config": [ 00:18:31.002 { 00:18:31.002 "method": "accel_set_options", 00:18:31.002 "params": { 00:18:31.002 "small_cache_size": 128, 00:18:31.002 "large_cache_size": 16, 00:18:31.002 "task_count": 2048, 00:18:31.002 "sequence_count": 2048, 00:18:31.002 "buf_count": 2048 00:18:31.002 } 00:18:31.002 } 00:18:31.002 ] 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "subsystem": "bdev", 00:18:31.002 "config": [ 00:18:31.002 { 00:18:31.002 "method": "bdev_set_options", 00:18:31.002 "params": { 00:18:31.002 "bdev_io_pool_size": 65535, 00:18:31.002 "bdev_io_cache_size": 256, 00:18:31.002 "bdev_auto_examine": true, 00:18:31.002 "iobuf_small_cache_size": 128, 00:18:31.002 "iobuf_large_cache_size": 16 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "bdev_raid_set_options", 00:18:31.002 "params": { 00:18:31.002 "process_window_size_kb": 1024, 00:18:31.002 "process_max_bandwidth_mb_sec": 0 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "bdev_iscsi_set_options", 00:18:31.002 "params": { 00:18:31.002 "timeout_sec": 30 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "bdev_nvme_set_options", 00:18:31.002 "params": { 00:18:31.002 "action_on_timeout": "none", 00:18:31.002 "timeout_us": 0, 00:18:31.002 "timeout_admin_us": 0, 00:18:31.002 "keep_alive_timeout_ms": 10000, 00:18:31.002 "arbitration_burst": 0, 00:18:31.002 "low_priority_weight": 0, 00:18:31.002 "medium_priority_weight": 0, 00:18:31.002 "high_priority_weight": 0, 00:18:31.002 "nvme_adminq_poll_period_us": 10000, 00:18:31.002 "nvme_ioq_poll_period_us": 0, 00:18:31.002 "io_queue_requests": 512, 00:18:31.002 "delay_cmd_submit": true, 00:18:31.002 "transport_retry_count": 4, 00:18:31.002 "bdev_retry_count": 3, 00:18:31.002 "transport_ack_timeout": 0, 00:18:31.002 "ctrlr_loss_timeout_sec": 0, 00:18:31.002 "reconnect_delay_sec": 0, 00:18:31.002 "fast_io_fail_timeout_sec": 0, 00:18:31.002 "disable_auto_failback": false, 00:18:31.002 "generate_uuids": false, 00:18:31.002 "transport_tos": 0, 00:18:31.002 "nvme_error_stat": false, 00:18:31.002 "rdma_srq_size": 0, 00:18:31.002 "io_path_stat": false, 00:18:31.002 "allow_accel_sequence": false, 00:18:31.002 "rdma_max_cq_size": 0, 00:18:31.002 "rdma_cm_event_timeout_ms": 0, 00:18:31.002 "dhchap_digests": [ 00:18:31.002 "sha256", 00:18:31.002 "sha384", 00:18:31.002 "sha512" 00:18:31.002 ], 00:18:31.002 "dhchap_dhgroups": [ 00:18:31.002 "null", 00:18:31.002 "ffdhe2048", 00:18:31.002 "ffdhe3072", 00:18:31.002 "ffdhe4096", 00:18:31.002 "ffdhe6144", 00:18:31.002 "ffdhe8192" 00:18:31.002 ], 00:18:31.002 "rdma_umr_per_io": false 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "bdev_nvme_attach_controller", 00:18:31.002 "params": { 00:18:31.002 "name": "TLSTEST", 00:18:31.002 "trtype": "TCP", 00:18:31.002 "adrfam": "IPv4", 00:18:31.002 "traddr": "10.0.0.2", 00:18:31.002 "trsvcid": "4420", 00:18:31.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.002 "prchk_reftag": false, 00:18:31.002 "prchk_guard": false, 00:18:31.002 "ctrlr_loss_timeout_sec": 0, 00:18:31.002 "reconnect_delay_sec": 0, 00:18:31.002 "fast_io_fail_timeout_sec": 0, 00:18:31.002 "psk": "key0", 00:18:31.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.002 "hdgst": false, 00:18:31.002 "ddgst": false, 00:18:31.002 "multipath": "multipath" 00:18:31.002 } 00:18:31.002 }, 
00:18:31.002 { 00:18:31.002 "method": "bdev_nvme_set_hotplug", 00:18:31.002 "params": { 00:18:31.002 "period_us": 100000, 00:18:31.002 "enable": false 00:18:31.002 } 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "method": "bdev_wait_for_examine" 00:18:31.002 } 00:18:31.002 ] 00:18:31.002 }, 00:18:31.002 { 00:18:31.002 "subsystem": "nbd", 00:18:31.002 "config": [] 00:18:31.002 } 00:18:31.002 ] 00:18:31.002 }' 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 693409 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 693409 ']' 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 693409 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 693409 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 693409' 00:18:31.002 killing process with pid 693409 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 693409 00:18:31.002 Received shutdown signal, test time was about 10.000000 seconds 00:18:31.002 00:18:31.002 Latency(us) 00:18:31.002 [2024-12-11T13:55:13.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.002 [2024-12-11T13:55:13.775Z] =================================================================================================================== 00:18:31.002 [2024-12-11T13:55:13.775Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:31.002 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 693409 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 693120 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 693120 ']' 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 693120 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 693120 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 693120' 00:18:31.262 killing process with pid 693120 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 693120 00:18:31.262 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 693120 00:18:31.522 14:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:31.522 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.522 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.522 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:31.522 "subsystems": [ 00:18:31.522 { 00:18:31.522 "subsystem": "keyring", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "keyring_file_add_key", 00:18:31.522 "params": { 00:18:31.522 "name": "key0", 00:18:31.522 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:31.522 } 00:18:31.522 } 00:18:31.522 ] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "iobuf", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "iobuf_set_options", 00:18:31.522 "params": { 00:18:31.522 "small_pool_count": 8192, 00:18:31.522 "large_pool_count": 1024, 00:18:31.522 "small_bufsize": 8192, 00:18:31.522 "large_bufsize": 135168, 00:18:31.522 "enable_numa": false 00:18:31.522 } 00:18:31.522 } 00:18:31.522 ] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "sock", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "sock_set_default_impl", 00:18:31.522 "params": { 00:18:31.522 "impl_name": "posix" 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "sock_impl_set_options", 00:18:31.522 "params": { 00:18:31.522 "impl_name": "ssl", 00:18:31.522 "recv_buf_size": 4096, 00:18:31.522 "send_buf_size": 4096, 00:18:31.522 "enable_recv_pipe": true, 00:18:31.522 "enable_quickack": false, 00:18:31.522 "enable_placement_id": 0, 00:18:31.522 "enable_zerocopy_send_server": true, 00:18:31.522 "enable_zerocopy_send_client": false, 00:18:31.522 "zerocopy_threshold": 0, 00:18:31.522 "tls_version": 0, 00:18:31.522 "enable_ktls": false 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "sock_impl_set_options", 00:18:31.522 "params": { 00:18:31.522 "impl_name": "posix", 00:18:31.522 "recv_buf_size": 2097152, 00:18:31.522 "send_buf_size": 2097152, 00:18:31.522 "enable_recv_pipe": true, 00:18:31.522 "enable_quickack": false, 00:18:31.522 "enable_placement_id": 0, 00:18:31.522 "enable_zerocopy_send_server": true, 00:18:31.522 "enable_zerocopy_send_client": false, 00:18:31.522 "zerocopy_threshold": 0, 00:18:31.522 "tls_version": 0, 00:18:31.522 "enable_ktls": false 00:18:31.522 } 00:18:31.522 } 00:18:31.522 ] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "vmd", 00:18:31.522 "config": [] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "accel", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "accel_set_options", 00:18:31.522 "params": { 00:18:31.522 "small_cache_size": 128, 00:18:31.522 "large_cache_size": 16, 00:18:31.522 "task_count": 2048, 00:18:31.522 "sequence_count": 2048, 00:18:31.522 "buf_count": 2048 00:18:31.522 } 00:18:31.522 } 00:18:31.522 ] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "bdev", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "bdev_set_options", 00:18:31.522 "params": { 00:18:31.522 "bdev_io_pool_size": 65535, 00:18:31.522 "bdev_io_cache_size": 256, 00:18:31.522 "bdev_auto_examine": true, 00:18:31.522 "iobuf_small_cache_size": 128, 00:18:31.522 "iobuf_large_cache_size": 16 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "bdev_raid_set_options", 00:18:31.522 "params": { 00:18:31.522 "process_window_size_kb": 1024, 00:18:31.522 
"process_max_bandwidth_mb_sec": 0 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "bdev_iscsi_set_options", 00:18:31.522 "params": { 00:18:31.522 "timeout_sec": 30 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "bdev_nvme_set_options", 00:18:31.522 "params": { 00:18:31.522 "action_on_timeout": "none", 00:18:31.522 "timeout_us": 0, 00:18:31.522 "timeout_admin_us": 0, 00:18:31.522 "keep_alive_timeout_ms": 10000, 00:18:31.522 "arbitration_burst": 0, 00:18:31.522 "low_priority_weight": 0, 00:18:31.522 "medium_priority_weight": 0, 00:18:31.522 "high_priority_weight": 0, 00:18:31.522 "nvme_adminq_poll_period_us": 10000, 00:18:31.522 "nvme_ioq_poll_period_us": 0, 00:18:31.522 "io_queue_requests": 0, 00:18:31.522 "delay_cmd_submit": true, 00:18:31.522 "transport_retry_count": 4, 00:18:31.522 "bdev_retry_count": 3, 00:18:31.522 "transport_ack_timeout": 0, 00:18:31.522 "ctrlr_loss_timeout_sec": 0, 00:18:31.522 "reconnect_delay_sec": 0, 00:18:31.522 "fast_io_fail_timeout_sec": 0, 00:18:31.522 "disable_auto_failback": false, 00:18:31.522 "generate_uuids": false, 00:18:31.522 "transport_tos": 0, 00:18:31.522 "nvme_error_stat": false, 00:18:31.522 "rdma_srq_size": 0, 00:18:31.522 "io_path_stat": false, 00:18:31.522 "allow_accel_sequence": false, 00:18:31.522 "rdma_max_cq_size": 0, 00:18:31.522 "rdma_cm_event_timeout_ms": 0, 00:18:31.522 "dhchap_digests": [ 00:18:31.522 "sha256", 00:18:31.522 "sha384", 00:18:31.522 "sha512" 00:18:31.522 ], 00:18:31.522 "dhchap_dhgroups": [ 00:18:31.522 "null", 00:18:31.522 "ffdhe2048", 00:18:31.522 "ffdhe3072", 00:18:31.522 "ffdhe4096", 00:18:31.522 "ffdhe6144", 00:18:31.522 "ffdhe8192" 00:18:31.522 ], 00:18:31.522 "rdma_umr_per_io": false 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "bdev_nvme_set_hotplug", 00:18:31.522 "params": { 00:18:31.522 "period_us": 100000, 00:18:31.522 "enable": false 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "bdev_malloc_create", 00:18:31.522 "params": { 00:18:31.522 "name": "malloc0", 00:18:31.522 "num_blocks": 8192, 00:18:31.522 "block_size": 4096, 00:18:31.522 "physical_block_size": 4096, 00:18:31.522 "uuid": "9059da58-8ed6-4495-84d6-f4e42aa2be19", 00:18:31.522 "optimal_io_boundary": 0, 00:18:31.522 "md_size": 0, 00:18:31.522 "dif_type": 0, 00:18:31.522 "dif_is_head_of_md": false, 00:18:31.522 "dif_pi_format": 0 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "bdev_wait_for_examine" 00:18:31.522 } 00:18:31.522 ] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "nbd", 00:18:31.522 "config": [] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "scheduler", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "framework_set_scheduler", 00:18:31.522 "params": { 00:18:31.522 "name": "static" 00:18:31.522 } 00:18:31.522 } 00:18:31.522 ] 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "subsystem": "nvmf", 00:18:31.522 "config": [ 00:18:31.522 { 00:18:31.522 "method": "nvmf_set_config", 00:18:31.522 "params": { 00:18:31.522 "discovery_filter": "match_any", 00:18:31.522 "admin_cmd_passthru": { 00:18:31.522 "identify_ctrlr": false 00:18:31.522 }, 00:18:31.522 "dhchap_digests": [ 00:18:31.522 "sha256", 00:18:31.522 "sha384", 00:18:31.522 "sha512" 00:18:31.522 ], 00:18:31.522 "dhchap_dhgroups": [ 00:18:31.522 "null", 00:18:31.522 "ffdhe2048", 00:18:31.522 "ffdhe3072", 00:18:31.522 "ffdhe4096", 00:18:31.522 "ffdhe6144", 00:18:31.522 "ffdhe8192" 00:18:31.522 ] 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 
"method": "nvmf_set_max_subsystems", 00:18:31.522 "params": { 00:18:31.522 "max_subsystems": 1024 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.522 "method": "nvmf_set_crdt", 00:18:31.522 "params": { 00:18:31.522 "crdt1": 0, 00:18:31.522 "crdt2": 0, 00:18:31.522 "crdt3": 0 00:18:31.522 } 00:18:31.522 }, 00:18:31.522 { 00:18:31.523 "method": "nvmf_create_transport", 00:18:31.523 "params": { 00:18:31.523 "trtype": "TCP", 00:18:31.523 "max_queue_depth": 128, 00:18:31.523 "max_io_qpairs_per_ctrlr": 127, 00:18:31.523 "in_capsule_data_size": 4096, 00:18:31.523 "max_io_size": 131072, 00:18:31.523 "io_unit_size": 131072, 00:18:31.523 "max_aq_depth": 128, 00:18:31.523 "num_shared_buffers": 511, 00:18:31.523 "buf_cache_size": 4294967295, 00:18:31.523 "dif_insert_or_strip": false, 00:18:31.523 "zcopy": false, 00:18:31.523 "c2h_success": false, 00:18:31.523 "sock_priority": 0, 00:18:31.523 "abort_timeout_sec": 1, 00:18:31.523 "ack_timeout": 0, 00:18:31.523 "data_wr_pool_size": 0 00:18:31.523 } 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "method": "nvmf_create_subsystem", 00:18:31.523 "params": { 00:18:31.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.523 "allow_any_host": false, 00:18:31.523 "serial_number": "SPDK00000000000001", 00:18:31.523 "model_number": "SPDK bdev Controller", 00:18:31.523 "max_namespaces": 10, 00:18:31.523 "min_cntlid": 1, 00:18:31.523 "max_cntlid": 65519, 00:18:31.523 "ana_reporting": false 00:18:31.523 } 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "method": "nvmf_subsystem_add_host", 00:18:31.523 "params": { 00:18:31.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.523 "host": "nqn.2016-06.io.spdk:host1", 00:18:31.523 "psk": "key0" 00:18:31.523 } 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "method": "nvmf_subsystem_add_ns", 00:18:31.523 "params": { 00:18:31.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.523 "namespace": { 00:18:31.523 "nsid": 1, 00:18:31.523 "bdev_name": "malloc0", 00:18:31.523 "nguid": "9059DA588ED6449584D6F4E42AA2BE19", 00:18:31.523 "uuid": "9059da58-8ed6-4495-84d6-f4e42aa2be19", 00:18:31.523 "no_auto_visible": false 00:18:31.523 } 00:18:31.523 } 00:18:31.523 }, 00:18:31.523 { 00:18:31.523 "method": "nvmf_subsystem_add_listener", 00:18:31.523 "params": { 00:18:31.523 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.523 "listen_address": { 00:18:31.523 "trtype": "TCP", 00:18:31.523 "adrfam": "IPv4", 00:18:31.523 "traddr": "10.0.0.2", 00:18:31.523 "trsvcid": "4420" 00:18:31.523 }, 00:18:31.523 "secure_channel": true 00:18:31.523 } 00:18:31.523 } 00:18:31.523 ] 00:18:31.523 } 00:18:31.523 ] 00:18:31.523 }' 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=693694 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 693694 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 693694 ']' 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.523 14:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.523 [2024-12-11 14:55:14.152417] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:31.523 [2024-12-11 14:55:14.152505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.523 [2024-12-11 14:55:14.222417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.523 [2024-12-11 14:55:14.273782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.523 [2024-12-11 14:55:14.273855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.523 [2024-12-11 14:55:14.273884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.523 [2024-12-11 14:55:14.273896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.523 [2024-12-11 14:55:14.273905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.523 [2024-12-11 14:55:14.274507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.783 [2024-12-11 14:55:14.514682] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.783 [2024-12-11 14:55:14.546704] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.783 [2024-12-11 14:55:14.546982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=693846 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 693846 /var/tmp/bdevperf.sock 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 693846 ']' 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:32.718 14:55:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.718 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:32.718 "subsystems": [ 00:18:32.718 { 00:18:32.718 "subsystem": "keyring", 00:18:32.718 "config": [ 00:18:32.718 { 00:18:32.718 "method": "keyring_file_add_key", 00:18:32.718 "params": { 00:18:32.718 "name": "key0", 00:18:32.718 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:32.718 } 00:18:32.718 } 00:18:32.718 ] 00:18:32.718 }, 00:18:32.718 { 00:18:32.718 "subsystem": "iobuf", 00:18:32.718 "config": [ 00:18:32.718 { 00:18:32.718 "method": "iobuf_set_options", 00:18:32.718 "params": { 00:18:32.719 "small_pool_count": 8192, 00:18:32.719 "large_pool_count": 1024, 00:18:32.719 "small_bufsize": 8192, 00:18:32.719 "large_bufsize": 135168, 00:18:32.719 "enable_numa": false 00:18:32.719 } 00:18:32.719 } 00:18:32.719 ] 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "subsystem": "sock", 00:18:32.719 "config": [ 00:18:32.719 { 00:18:32.719 "method": "sock_set_default_impl", 00:18:32.719 "params": { 00:18:32.719 "impl_name": "posix" 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "sock_impl_set_options", 00:18:32.719 "params": { 00:18:32.719 "impl_name": "ssl", 00:18:32.719 "recv_buf_size": 4096, 00:18:32.719 "send_buf_size": 4096, 00:18:32.719 "enable_recv_pipe": true, 00:18:32.719 "enable_quickack": false, 00:18:32.719 "enable_placement_id": 0, 00:18:32.719 "enable_zerocopy_send_server": true, 00:18:32.719 "enable_zerocopy_send_client": false, 00:18:32.719 "zerocopy_threshold": 0, 00:18:32.719 "tls_version": 0, 00:18:32.719 "enable_ktls": false 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "sock_impl_set_options", 00:18:32.719 "params": { 00:18:32.719 "impl_name": "posix", 00:18:32.719 "recv_buf_size": 2097152, 00:18:32.719 "send_buf_size": 2097152, 00:18:32.719 "enable_recv_pipe": true, 00:18:32.719 "enable_quickack": false, 00:18:32.719 "enable_placement_id": 0, 00:18:32.719 "enable_zerocopy_send_server": true, 00:18:32.719 "enable_zerocopy_send_client": false, 00:18:32.719 "zerocopy_threshold": 0, 00:18:32.719 "tls_version": 0, 00:18:32.719 "enable_ktls": false 00:18:32.719 } 00:18:32.719 } 00:18:32.719 ] 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "subsystem": "vmd", 00:18:32.719 "config": [] 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "subsystem": "accel", 00:18:32.719 "config": [ 00:18:32.719 { 00:18:32.719 "method": "accel_set_options", 00:18:32.719 "params": { 00:18:32.719 "small_cache_size": 128, 00:18:32.719 "large_cache_size": 16, 00:18:32.719 "task_count": 2048, 00:18:32.719 "sequence_count": 2048, 00:18:32.719 "buf_count": 2048 00:18:32.719 } 00:18:32.719 } 00:18:32.719 ] 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "subsystem": "bdev", 00:18:32.719 "config": [ 00:18:32.719 { 00:18:32.719 "method": "bdev_set_options", 00:18:32.719 "params": { 00:18:32.719 "bdev_io_pool_size": 65535, 00:18:32.719 "bdev_io_cache_size": 256, 00:18:32.719 "bdev_auto_examine": true, 00:18:32.719 "iobuf_small_cache_size": 128, 00:18:32.719 "iobuf_large_cache_size": 16 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "bdev_raid_set_options", 00:18:32.719 "params": { 00:18:32.719 "process_window_size_kb": 1024, 00:18:32.719 "process_max_bandwidth_mb_sec": 0 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "bdev_iscsi_set_options", 00:18:32.719 "params": { 00:18:32.719 "timeout_sec": 30 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 
"method": "bdev_nvme_set_options", 00:18:32.719 "params": { 00:18:32.719 "action_on_timeout": "none", 00:18:32.719 "timeout_us": 0, 00:18:32.719 "timeout_admin_us": 0, 00:18:32.719 "keep_alive_timeout_ms": 10000, 00:18:32.719 "arbitration_burst": 0, 00:18:32.719 "low_priority_weight": 0, 00:18:32.719 "medium_priority_weight": 0, 00:18:32.719 "high_priority_weight": 0, 00:18:32.719 "nvme_adminq_poll_period_us": 10000, 00:18:32.719 "nvme_ioq_poll_period_us": 0, 00:18:32.719 "io_queue_requests": 512, 00:18:32.719 "delay_cmd_submit": true, 00:18:32.719 "transport_retry_count": 4, 00:18:32.719 "bdev_retry_count": 3, 00:18:32.719 "transport_ack_timeout": 0, 00:18:32.719 "ctrlr_loss_timeout_sec": 0, 00:18:32.719 "reconnect_delay_sec": 0, 00:18:32.719 "fast_io_fail_timeout_sec": 0, 00:18:32.719 "disable_auto_failback": false, 00:18:32.719 "generate_uuids": false, 00:18:32.719 "transport_tos": 0, 00:18:32.719 "nvme_error_stat": false, 00:18:32.719 "rdma_srq_size": 0, 00:18:32.719 "io_path_stat": false, 00:18:32.719 "allow_accel_sequence": false, 00:18:32.719 "rdma_max_cq_size": 0, 00:18:32.719 "rdma_cm_event_timeout_ms": 0, 00:18:32.719 "dhchap_digests": [ 00:18:32.719 "sha256", 00:18:32.719 "sha384", 00:18:32.719 "sha512" 00:18:32.719 ], 00:18:32.719 "dhchap_dhgroups": [ 00:18:32.719 "null", 00:18:32.719 "ffdhe2048", 00:18:32.719 "ffdhe3072", 00:18:32.719 "ffdhe4096", 00:18:32.719 "ffdhe6144", 00:18:32.719 "ffdhe8192" 00:18:32.719 ], 00:18:32.719 "rdma_umr_per_io": false 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "bdev_nvme_attach_controller", 00:18:32.719 "params": { 00:18:32.719 "name": "TLSTEST", 00:18:32.719 "trtype": "TCP", 00:18:32.719 "adrfam": "IPv4", 00:18:32.719 "traddr": "10.0.0.2", 00:18:32.719 "trsvcid": "4420", 00:18:32.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.719 "prchk_reftag": false, 00:18:32.719 "prchk_guard": false, 00:18:32.719 "ctrlr_loss_timeout_sec": 0, 00:18:32.719 "reconnect_delay_sec": 0, 00:18:32.719 "fast_io_fail_timeout_sec": 0, 00:18:32.719 "psk": "key0", 00:18:32.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.719 "hdgst": false, 00:18:32.719 "ddgst": false, 00:18:32.719 "multipath": "multipath" 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "bdev_nvme_set_hotplug", 00:18:32.719 "params": { 00:18:32.719 "period_us": 100000, 00:18:32.719 "enable": false 00:18:32.719 } 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "method": "bdev_wait_for_examine" 00:18:32.719 } 00:18:32.719 ] 00:18:32.719 }, 00:18:32.719 { 00:18:32.719 "subsystem": "nbd", 00:18:32.719 "config": [] 00:18:32.719 } 00:18:32.719 ] 00:18:32.719 }' 00:18:32.719 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.719 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.719 14:55:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.719 [2024-12-11 14:55:15.212143] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:18:32.719 [2024-12-11 14:55:15.212224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693846 ] 00:18:32.719 [2024-12-11 14:55:15.279702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.719 [2024-12-11 14:55:15.337703] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.978 [2024-12-11 14:55:15.520262] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.546 14:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.546 14:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.546 14:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:33.546 Running I/O for 10 seconds... 00:18:35.863 3428.00 IOPS, 13.39 MiB/s [2024-12-11T13:55:19.573Z] 3428.50 IOPS, 13.39 MiB/s [2024-12-11T13:55:20.511Z] 3466.00 IOPS, 13.54 MiB/s [2024-12-11T13:55:21.451Z] 3473.75 IOPS, 13.57 MiB/s [2024-12-11T13:55:22.387Z] 3481.20 IOPS, 13.60 MiB/s [2024-12-11T13:55:23.769Z] 3485.67 IOPS, 13.62 MiB/s [2024-12-11T13:55:24.339Z] 3492.57 IOPS, 13.64 MiB/s [2024-12-11T13:55:25.735Z] 3483.75 IOPS, 13.61 MiB/s [2024-12-11T13:55:26.670Z] 3463.78 IOPS, 13.53 MiB/s [2024-12-11T13:55:26.670Z] 3469.10 IOPS, 13.55 MiB/s 00:18:43.897 Latency(us) 00:18:43.897 [2024-12-11T13:55:26.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.897 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:43.897 Verification LBA range: start 0x0 length 0x2000 00:18:43.897 TLSTESTn1 : 10.03 3471.11 13.56 0.00 0.00 36805.02 6796.33 71070.15 00:18:43.897 [2024-12-11T13:55:26.670Z] =================================================================================================================== 00:18:43.897 [2024-12-11T13:55:26.670Z] Total : 3471.11 13.56 0.00 0.00 36805.02 6796.33 71070.15 00:18:43.897 { 00:18:43.897 "results": [ 00:18:43.897 { 00:18:43.897 "job": "TLSTESTn1", 00:18:43.897 "core_mask": "0x4", 00:18:43.897 "workload": "verify", 00:18:43.897 "status": "finished", 00:18:43.897 "verify_range": { 00:18:43.897 "start": 0, 00:18:43.897 "length": 8192 00:18:43.897 }, 00:18:43.897 "queue_depth": 128, 00:18:43.897 "io_size": 4096, 00:18:43.897 "runtime": 10.030497, 00:18:43.897 "iops": 3471.1141431974906, 00:18:43.897 "mibps": 13.559039621865198, 00:18:43.897 "io_failed": 0, 00:18:43.897 "io_timeout": 0, 00:18:43.897 "avg_latency_us": 36805.02287043686, 00:18:43.897 "min_latency_us": 6796.325925925926, 00:18:43.897 "max_latency_us": 71070.15111111112 00:18:43.897 } 00:18:43.897 ], 00:18:43.897 "core_count": 1 00:18:43.897 } 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 693846 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 693846 ']' 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 693846 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 693846 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 693846' 00:18:43.897 killing process with pid 693846 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 693846 00:18:43.897 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.897 00:18:43.897 Latency(us) 00:18:43.897 [2024-12-11T13:55:26.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.897 [2024-12-11T13:55:26.670Z] =================================================================================================================== 00:18:43.897 [2024-12-11T13:55:26.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.897 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 693846 00:18:43.898 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 693694 00:18:43.898 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 693694 ']' 00:18:43.898 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 693694 00:18:43.898 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.898 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.898 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 693694 00:18:44.156 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:44.156 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:44.156 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 693694' 00:18:44.156 killing process with pid 693694 00:18:44.156 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 693694 00:18:44.156 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 693694 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=695197 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 695197 00:18:44.415 14:55:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 695197 ']' 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.415 14:55:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.415 [2024-12-11 14:55:26.992228] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:44.415 [2024-12-11 14:55:26.992314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.415 [2024-12-11 14:55:27.066439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.415 [2024-12-11 14:55:27.121569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.415 [2024-12-11 14:55:27.121624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.415 [2024-12-11 14:55:27.121653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.415 [2024-12-11 14:55:27.121665] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.415 [2024-12-11 14:55:27.121677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
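For readers reproducing this target bring-up outside the harness: the waitforlisten helper traced above simply starts nvmf_tgt in the test namespace and polls its UNIX-domain RPC socket until it answers. A minimal sketch of the same loop, with the namespace name, binary path, and socket path taken from the trace above (rpc_get_methods is a core SPDK RPC, used here only as a liveness probe):

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# Poll the RPC socket until the target responds; any core RPC works as a
# probe, and rpc_get_methods is the cheapest.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
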
00:18:44.415 [2024-12-11 14:55:27.122252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.hUxNXkm2GK 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hUxNXkm2GK 00:18:44.674 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:44.932 [2024-12-11 14:55:27.516285] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.932 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:45.190 14:55:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:45.448 [2024-12-11 14:55:28.045696] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:45.448 [2024-12-11 14:55:28.045970] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.448 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:45.705 malloc0 00:18:45.705 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.964 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:46.221 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=695463 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 695463 /var/tmp/bdevperf.sock 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 695463 ']' 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.480 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.480 [2024-12-11 14:55:29.194852] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:46.480 [2024-12-11 14:55:29.194941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695463 ] 00:18:46.738 [2024-12-11 14:55:29.266126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.738 [2024-12-11 14:55:29.324113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.738 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.738 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.738 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:46.996 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.254 [2024-12-11 14:55:29.960166] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.512 nvme0n1 00:18:47.512 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.512 Running I/O for 1 seconds... 
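Stripped of the xtrace prefixes, the initiator-side TLS sequence the trace above executes is three commands against the bdevperf RPC socket. A sketch, reusing the key path, address, and NQNs from the log:

# Register the pre-shared key under the name the attach call references.
./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK
# Attach the TCP controller; --psk selects the key and turns the fabric
# connection into a TLS session (flagged experimental in the log above).
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Kick off the queued verify workload once the bdev is up.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
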
00:18:48.504 3298.00 IOPS, 12.88 MiB/s 00:18:48.504 Latency(us) 00:18:48.504 [2024-12-11T13:55:31.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.504 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:48.504 Verification LBA range: start 0x0 length 0x2000 00:18:48.504 nvme0n1 : 1.02 3360.54 13.13 0.00 0.00 37748.52 8009.96 38641.97 00:18:48.504 [2024-12-11T13:55:31.277Z] =================================================================================================================== 00:18:48.504 [2024-12-11T13:55:31.277Z] Total : 3360.54 13.13 0.00 0.00 37748.52 8009.96 38641.97 00:18:48.504 { 00:18:48.504 "results": [ 00:18:48.504 { 00:18:48.504 "job": "nvme0n1", 00:18:48.504 "core_mask": "0x2", 00:18:48.504 "workload": "verify", 00:18:48.504 "status": "finished", 00:18:48.504 "verify_range": { 00:18:48.504 "start": 0, 00:18:48.504 "length": 8192 00:18:48.504 }, 00:18:48.504 "queue_depth": 128, 00:18:48.504 "io_size": 4096, 00:18:48.504 "runtime": 1.019478, 00:18:48.504 "iops": 3360.54333688417, 00:18:48.504 "mibps": 13.127122409703789, 00:18:48.504 "io_failed": 0, 00:18:48.504 "io_timeout": 0, 00:18:48.504 "avg_latency_us": 37748.52203844241, 00:18:48.504 "min_latency_us": 8009.955555555555, 00:18:48.504 "max_latency_us": 38641.96740740741 00:18:48.504 } 00:18:48.504 ], 00:18:48.504 "core_count": 1 00:18:48.504 } 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 695463 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 695463 ']' 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 695463 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695463 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695463' 00:18:48.504 killing process with pid 695463 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 695463 00:18:48.504 Received shutdown signal, test time was about 1.000000 seconds 00:18:48.504 00:18:48.504 Latency(us) 00:18:48.504 [2024-12-11T13:55:31.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.504 [2024-12-11T13:55:31.277Z] =================================================================================================================== 00:18:48.504 [2024-12-11T13:55:31.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.504 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 695463 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 695197 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 695197 ']' 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 695197 00:18:48.763 14:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695197 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695197' 00:18:48.763 killing process with pid 695197 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 695197 00:18:48.763 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 695197 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=695862 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 695862 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 695862 ']' 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.021 14:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.279 [2024-12-11 14:55:31.798346] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:49.279 [2024-12-11 14:55:31.798427] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.279 [2024-12-11 14:55:31.869411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.279 [2024-12-11 14:55:31.924180] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.279 [2024-12-11 14:55:31.924239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:49.279 [2024-12-11 14:55:31.924267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.279 [2024-12-11 14:55:31.924278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.279 [2024-12-11 14:55:31.924288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.279 [2024-12-11 14:55:31.924916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.279 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.279 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.279 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.279 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.279 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.538 [2024-12-11 14:55:32.064332] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.538 malloc0 00:18:49.538 [2024-12-11 14:55:32.095338] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.538 [2024-12-11 14:55:32.095663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=695887 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 695887 /var/tmp/bdevperf.sock 00:18:49.538 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 695887 ']' 00:18:49.539 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:49.539 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.539 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.539 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.539 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.539 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.539 [2024-12-11 14:55:32.169347] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:18:49.539 [2024-12-11 14:55:32.169424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid695887 ] 00:18:49.539 [2024-12-11 14:55:32.235612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.539 [2024-12-11 14:55:32.293333] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.796 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.796 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.796 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hUxNXkm2GK 00:18:50.054 14:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:50.310 [2024-12-11 14:55:32.927394] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.310 nvme0n1 00:18:50.311 14:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:50.570 Running I/O for 1 seconds... 00:18:51.509 3410.00 IOPS, 13.32 MiB/s 00:18:51.509 Latency(us) 00:18:51.509 [2024-12-11T13:55:34.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.509 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:51.509 Verification LBA range: start 0x0 length 0x2000 00:18:51.509 nvme0n1 : 1.02 3473.48 13.57 0.00 0.00 36532.36 7330.32 37088.52 00:18:51.509 [2024-12-11T13:55:34.282Z] =================================================================================================================== 00:18:51.509 [2024-12-11T13:55:34.282Z] Total : 3473.48 13.57 0.00 0.00 36532.36 7330.32 37088.52 00:18:51.509 { 00:18:51.509 "results": [ 00:18:51.509 { 00:18:51.509 "job": "nvme0n1", 00:18:51.509 "core_mask": "0x2", 00:18:51.509 "workload": "verify", 00:18:51.509 "status": "finished", 00:18:51.509 "verify_range": { 00:18:51.509 "start": 0, 00:18:51.509 "length": 8192 00:18:51.509 }, 00:18:51.509 "queue_depth": 128, 00:18:51.509 "io_size": 4096, 00:18:51.509 "runtime": 1.018576, 00:18:51.509 "iops": 3473.47669687878, 00:18:51.509 "mibps": 13.568268347182734, 00:18:51.509 "io_failed": 0, 00:18:51.509 "io_timeout": 0, 00:18:51.509 "avg_latency_us": 36532.36489835228, 00:18:51.509 "min_latency_us": 7330.322962962963, 00:18:51.509 "max_latency_us": 37088.52148148148 00:18:51.509 } 00:18:51.509 ], 00:18:51.509 "core_count": 1 00:18:51.509 } 00:18:51.509 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:51.509 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.509 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.767 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.767 14:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:51.767 "subsystems": [ 00:18:51.767 { 00:18:51.767 "subsystem": "keyring", 00:18:51.767 "config": [ 00:18:51.767 { 00:18:51.767 "method": "keyring_file_add_key", 00:18:51.767 "params": { 00:18:51.767 "name": "key0", 00:18:51.767 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:51.767 } 00:18:51.767 } 00:18:51.767 ] 00:18:51.767 }, 00:18:51.767 { 00:18:51.767 "subsystem": "iobuf", 00:18:51.767 "config": [ 00:18:51.767 { 00:18:51.767 "method": "iobuf_set_options", 00:18:51.767 "params": { 00:18:51.767 "small_pool_count": 8192, 00:18:51.767 "large_pool_count": 1024, 00:18:51.767 "small_bufsize": 8192, 00:18:51.767 "large_bufsize": 135168, 00:18:51.767 "enable_numa": false 00:18:51.767 } 00:18:51.767 } 00:18:51.767 ] 00:18:51.767 }, 00:18:51.767 { 00:18:51.767 "subsystem": "sock", 00:18:51.767 "config": [ 00:18:51.767 { 00:18:51.768 "method": "sock_set_default_impl", 00:18:51.768 "params": { 00:18:51.768 "impl_name": "posix" 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "sock_impl_set_options", 00:18:51.768 "params": { 00:18:51.768 "impl_name": "ssl", 00:18:51.768 "recv_buf_size": 4096, 00:18:51.768 "send_buf_size": 4096, 00:18:51.768 "enable_recv_pipe": true, 00:18:51.768 "enable_quickack": false, 00:18:51.768 "enable_placement_id": 0, 00:18:51.768 "enable_zerocopy_send_server": true, 00:18:51.768 "enable_zerocopy_send_client": false, 00:18:51.768 "zerocopy_threshold": 0, 00:18:51.768 "tls_version": 0, 00:18:51.768 "enable_ktls": false 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "sock_impl_set_options", 00:18:51.768 "params": { 00:18:51.768 "impl_name": "posix", 00:18:51.768 "recv_buf_size": 2097152, 00:18:51.768 "send_buf_size": 2097152, 00:18:51.768 "enable_recv_pipe": true, 00:18:51.768 "enable_quickack": false, 00:18:51.768 "enable_placement_id": 0, 00:18:51.768 "enable_zerocopy_send_server": true, 00:18:51.768 "enable_zerocopy_send_client": false, 00:18:51.768 "zerocopy_threshold": 0, 00:18:51.768 "tls_version": 0, 00:18:51.768 "enable_ktls": false 00:18:51.768 } 00:18:51.768 } 00:18:51.768 ] 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "subsystem": "vmd", 00:18:51.768 "config": [] 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "subsystem": "accel", 00:18:51.768 "config": [ 00:18:51.768 { 00:18:51.768 "method": "accel_set_options", 00:18:51.768 "params": { 00:18:51.768 "small_cache_size": 128, 00:18:51.768 "large_cache_size": 16, 00:18:51.768 "task_count": 2048, 00:18:51.768 "sequence_count": 2048, 00:18:51.768 "buf_count": 2048 00:18:51.768 } 00:18:51.768 } 00:18:51.768 ] 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "subsystem": "bdev", 00:18:51.768 "config": [ 00:18:51.768 { 00:18:51.768 "method": "bdev_set_options", 00:18:51.768 "params": { 00:18:51.768 "bdev_io_pool_size": 65535, 00:18:51.768 "bdev_io_cache_size": 256, 00:18:51.768 "bdev_auto_examine": true, 00:18:51.768 "iobuf_small_cache_size": 128, 00:18:51.768 "iobuf_large_cache_size": 16 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "bdev_raid_set_options", 00:18:51.768 "params": { 00:18:51.768 "process_window_size_kb": 1024, 00:18:51.768 "process_max_bandwidth_mb_sec": 0 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "bdev_iscsi_set_options", 00:18:51.768 "params": { 00:18:51.768 "timeout_sec": 30 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "bdev_nvme_set_options", 00:18:51.768 "params": { 00:18:51.768 "action_on_timeout": "none", 00:18:51.768 
"timeout_us": 0, 00:18:51.768 "timeout_admin_us": 0, 00:18:51.768 "keep_alive_timeout_ms": 10000, 00:18:51.768 "arbitration_burst": 0, 00:18:51.768 "low_priority_weight": 0, 00:18:51.768 "medium_priority_weight": 0, 00:18:51.768 "high_priority_weight": 0, 00:18:51.768 "nvme_adminq_poll_period_us": 10000, 00:18:51.768 "nvme_ioq_poll_period_us": 0, 00:18:51.768 "io_queue_requests": 0, 00:18:51.768 "delay_cmd_submit": true, 00:18:51.768 "transport_retry_count": 4, 00:18:51.768 "bdev_retry_count": 3, 00:18:51.768 "transport_ack_timeout": 0, 00:18:51.768 "ctrlr_loss_timeout_sec": 0, 00:18:51.768 "reconnect_delay_sec": 0, 00:18:51.768 "fast_io_fail_timeout_sec": 0, 00:18:51.768 "disable_auto_failback": false, 00:18:51.768 "generate_uuids": false, 00:18:51.768 "transport_tos": 0, 00:18:51.768 "nvme_error_stat": false, 00:18:51.768 "rdma_srq_size": 0, 00:18:51.768 "io_path_stat": false, 00:18:51.768 "allow_accel_sequence": false, 00:18:51.768 "rdma_max_cq_size": 0, 00:18:51.768 "rdma_cm_event_timeout_ms": 0, 00:18:51.768 "dhchap_digests": [ 00:18:51.768 "sha256", 00:18:51.768 "sha384", 00:18:51.768 "sha512" 00:18:51.768 ], 00:18:51.768 "dhchap_dhgroups": [ 00:18:51.768 "null", 00:18:51.768 "ffdhe2048", 00:18:51.768 "ffdhe3072", 00:18:51.768 "ffdhe4096", 00:18:51.768 "ffdhe6144", 00:18:51.768 "ffdhe8192" 00:18:51.768 ], 00:18:51.768 "rdma_umr_per_io": false 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "bdev_nvme_set_hotplug", 00:18:51.768 "params": { 00:18:51.768 "period_us": 100000, 00:18:51.768 "enable": false 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "bdev_malloc_create", 00:18:51.768 "params": { 00:18:51.768 "name": "malloc0", 00:18:51.768 "num_blocks": 8192, 00:18:51.768 "block_size": 4096, 00:18:51.768 "physical_block_size": 4096, 00:18:51.768 "uuid": "c22dfc89-d43f-4424-8a25-1e033b0f5788", 00:18:51.768 "optimal_io_boundary": 0, 00:18:51.768 "md_size": 0, 00:18:51.768 "dif_type": 0, 00:18:51.768 "dif_is_head_of_md": false, 00:18:51.768 "dif_pi_format": 0 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "bdev_wait_for_examine" 00:18:51.768 } 00:18:51.768 ] 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "subsystem": "nbd", 00:18:51.768 "config": [] 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "subsystem": "scheduler", 00:18:51.768 "config": [ 00:18:51.768 { 00:18:51.768 "method": "framework_set_scheduler", 00:18:51.768 "params": { 00:18:51.768 "name": "static" 00:18:51.768 } 00:18:51.768 } 00:18:51.768 ] 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "subsystem": "nvmf", 00:18:51.768 "config": [ 00:18:51.768 { 00:18:51.768 "method": "nvmf_set_config", 00:18:51.768 "params": { 00:18:51.768 "discovery_filter": "match_any", 00:18:51.768 "admin_cmd_passthru": { 00:18:51.768 "identify_ctrlr": false 00:18:51.768 }, 00:18:51.768 "dhchap_digests": [ 00:18:51.768 "sha256", 00:18:51.768 "sha384", 00:18:51.768 "sha512" 00:18:51.768 ], 00:18:51.768 "dhchap_dhgroups": [ 00:18:51.768 "null", 00:18:51.768 "ffdhe2048", 00:18:51.768 "ffdhe3072", 00:18:51.768 "ffdhe4096", 00:18:51.768 "ffdhe6144", 00:18:51.768 "ffdhe8192" 00:18:51.768 ] 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "nvmf_set_max_subsystems", 00:18:51.768 "params": { 00:18:51.768 "max_subsystems": 1024 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "nvmf_set_crdt", 00:18:51.768 "params": { 00:18:51.768 "crdt1": 0, 00:18:51.768 "crdt2": 0, 00:18:51.768 "crdt3": 0 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": 
"nvmf_create_transport", 00:18:51.768 "params": { 00:18:51.768 "trtype": "TCP", 00:18:51.768 "max_queue_depth": 128, 00:18:51.768 "max_io_qpairs_per_ctrlr": 127, 00:18:51.768 "in_capsule_data_size": 4096, 00:18:51.768 "max_io_size": 131072, 00:18:51.768 "io_unit_size": 131072, 00:18:51.768 "max_aq_depth": 128, 00:18:51.768 "num_shared_buffers": 511, 00:18:51.768 "buf_cache_size": 4294967295, 00:18:51.768 "dif_insert_or_strip": false, 00:18:51.768 "zcopy": false, 00:18:51.768 "c2h_success": false, 00:18:51.768 "sock_priority": 0, 00:18:51.768 "abort_timeout_sec": 1, 00:18:51.768 "ack_timeout": 0, 00:18:51.768 "data_wr_pool_size": 0 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "nvmf_create_subsystem", 00:18:51.768 "params": { 00:18:51.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.768 "allow_any_host": false, 00:18:51.768 "serial_number": "00000000000000000000", 00:18:51.768 "model_number": "SPDK bdev Controller", 00:18:51.768 "max_namespaces": 32, 00:18:51.768 "min_cntlid": 1, 00:18:51.768 "max_cntlid": 65519, 00:18:51.768 "ana_reporting": false 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "nvmf_subsystem_add_host", 00:18:51.768 "params": { 00:18:51.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.768 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.768 "psk": "key0" 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "nvmf_subsystem_add_ns", 00:18:51.768 "params": { 00:18:51.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.768 "namespace": { 00:18:51.768 "nsid": 1, 00:18:51.768 "bdev_name": "malloc0", 00:18:51.768 "nguid": "C22DFC89D43F44248A251E033B0F5788", 00:18:51.768 "uuid": "c22dfc89-d43f-4424-8a25-1e033b0f5788", 00:18:51.768 "no_auto_visible": false 00:18:51.768 } 00:18:51.768 } 00:18:51.768 }, 00:18:51.768 { 00:18:51.768 "method": "nvmf_subsystem_add_listener", 00:18:51.768 "params": { 00:18:51.768 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.768 "listen_address": { 00:18:51.768 "trtype": "TCP", 00:18:51.768 "adrfam": "IPv4", 00:18:51.768 "traddr": "10.0.0.2", 00:18:51.768 "trsvcid": "4420" 00:18:51.768 }, 00:18:51.768 "secure_channel": false, 00:18:51.768 "sock_impl": "ssl" 00:18:51.768 } 00:18:51.768 } 00:18:51.768 ] 00:18:51.768 } 00:18:51.768 ] 00:18:51.768 }' 00:18:51.768 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:52.027 "subsystems": [ 00:18:52.027 { 00:18:52.027 "subsystem": "keyring", 00:18:52.027 "config": [ 00:18:52.027 { 00:18:52.027 "method": "keyring_file_add_key", 00:18:52.027 "params": { 00:18:52.027 "name": "key0", 00:18:52.027 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:52.027 } 00:18:52.027 } 00:18:52.027 ] 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "subsystem": "iobuf", 00:18:52.027 "config": [ 00:18:52.027 { 00:18:52.027 "method": "iobuf_set_options", 00:18:52.027 "params": { 00:18:52.027 "small_pool_count": 8192, 00:18:52.027 "large_pool_count": 1024, 00:18:52.027 "small_bufsize": 8192, 00:18:52.027 "large_bufsize": 135168, 00:18:52.027 "enable_numa": false 00:18:52.027 } 00:18:52.027 } 00:18:52.027 ] 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "subsystem": "sock", 00:18:52.027 "config": [ 00:18:52.027 { 00:18:52.027 "method": "sock_set_default_impl", 00:18:52.027 "params": { 00:18:52.027 "impl_name": "posix" 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 
"method": "sock_impl_set_options", 00:18:52.027 "params": { 00:18:52.027 "impl_name": "ssl", 00:18:52.027 "recv_buf_size": 4096, 00:18:52.027 "send_buf_size": 4096, 00:18:52.027 "enable_recv_pipe": true, 00:18:52.027 "enable_quickack": false, 00:18:52.027 "enable_placement_id": 0, 00:18:52.027 "enable_zerocopy_send_server": true, 00:18:52.027 "enable_zerocopy_send_client": false, 00:18:52.027 "zerocopy_threshold": 0, 00:18:52.027 "tls_version": 0, 00:18:52.027 "enable_ktls": false 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "sock_impl_set_options", 00:18:52.027 "params": { 00:18:52.027 "impl_name": "posix", 00:18:52.027 "recv_buf_size": 2097152, 00:18:52.027 "send_buf_size": 2097152, 00:18:52.027 "enable_recv_pipe": true, 00:18:52.027 "enable_quickack": false, 00:18:52.027 "enable_placement_id": 0, 00:18:52.027 "enable_zerocopy_send_server": true, 00:18:52.027 "enable_zerocopy_send_client": false, 00:18:52.027 "zerocopy_threshold": 0, 00:18:52.027 "tls_version": 0, 00:18:52.027 "enable_ktls": false 00:18:52.027 } 00:18:52.027 } 00:18:52.027 ] 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "subsystem": "vmd", 00:18:52.027 "config": [] 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "subsystem": "accel", 00:18:52.027 "config": [ 00:18:52.027 { 00:18:52.027 "method": "accel_set_options", 00:18:52.027 "params": { 00:18:52.027 "small_cache_size": 128, 00:18:52.027 "large_cache_size": 16, 00:18:52.027 "task_count": 2048, 00:18:52.027 "sequence_count": 2048, 00:18:52.027 "buf_count": 2048 00:18:52.027 } 00:18:52.027 } 00:18:52.027 ] 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "subsystem": "bdev", 00:18:52.027 "config": [ 00:18:52.027 { 00:18:52.027 "method": "bdev_set_options", 00:18:52.027 "params": { 00:18:52.027 "bdev_io_pool_size": 65535, 00:18:52.027 "bdev_io_cache_size": 256, 00:18:52.027 "bdev_auto_examine": true, 00:18:52.027 "iobuf_small_cache_size": 128, 00:18:52.027 "iobuf_large_cache_size": 16 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_raid_set_options", 00:18:52.027 "params": { 00:18:52.027 "process_window_size_kb": 1024, 00:18:52.027 "process_max_bandwidth_mb_sec": 0 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_iscsi_set_options", 00:18:52.027 "params": { 00:18:52.027 "timeout_sec": 30 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_nvme_set_options", 00:18:52.027 "params": { 00:18:52.027 "action_on_timeout": "none", 00:18:52.027 "timeout_us": 0, 00:18:52.027 "timeout_admin_us": 0, 00:18:52.027 "keep_alive_timeout_ms": 10000, 00:18:52.027 "arbitration_burst": 0, 00:18:52.027 "low_priority_weight": 0, 00:18:52.027 "medium_priority_weight": 0, 00:18:52.027 "high_priority_weight": 0, 00:18:52.027 "nvme_adminq_poll_period_us": 10000, 00:18:52.027 "nvme_ioq_poll_period_us": 0, 00:18:52.027 "io_queue_requests": 512, 00:18:52.027 "delay_cmd_submit": true, 00:18:52.027 "transport_retry_count": 4, 00:18:52.027 "bdev_retry_count": 3, 00:18:52.027 "transport_ack_timeout": 0, 00:18:52.027 "ctrlr_loss_timeout_sec": 0, 00:18:52.027 "reconnect_delay_sec": 0, 00:18:52.027 "fast_io_fail_timeout_sec": 0, 00:18:52.027 "disable_auto_failback": false, 00:18:52.027 "generate_uuids": false, 00:18:52.027 "transport_tos": 0, 00:18:52.027 "nvme_error_stat": false, 00:18:52.027 "rdma_srq_size": 0, 00:18:52.027 "io_path_stat": false, 00:18:52.027 "allow_accel_sequence": false, 00:18:52.027 "rdma_max_cq_size": 0, 00:18:52.027 "rdma_cm_event_timeout_ms": 0, 00:18:52.027 "dhchap_digests": [ 00:18:52.027 
"sha256", 00:18:52.027 "sha384", 00:18:52.027 "sha512" 00:18:52.027 ], 00:18:52.027 "dhchap_dhgroups": [ 00:18:52.027 "null", 00:18:52.027 "ffdhe2048", 00:18:52.027 "ffdhe3072", 00:18:52.027 "ffdhe4096", 00:18:52.027 "ffdhe6144", 00:18:52.027 "ffdhe8192" 00:18:52.027 ], 00:18:52.027 "rdma_umr_per_io": false 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_nvme_attach_controller", 00:18:52.027 "params": { 00:18:52.027 "name": "nvme0", 00:18:52.027 "trtype": "TCP", 00:18:52.027 "adrfam": "IPv4", 00:18:52.027 "traddr": "10.0.0.2", 00:18:52.027 "trsvcid": "4420", 00:18:52.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.027 "prchk_reftag": false, 00:18:52.027 "prchk_guard": false, 00:18:52.027 "ctrlr_loss_timeout_sec": 0, 00:18:52.027 "reconnect_delay_sec": 0, 00:18:52.027 "fast_io_fail_timeout_sec": 0, 00:18:52.027 "psk": "key0", 00:18:52.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.027 "hdgst": false, 00:18:52.027 "ddgst": false, 00:18:52.027 "multipath": "multipath" 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_nvme_set_hotplug", 00:18:52.027 "params": { 00:18:52.027 "period_us": 100000, 00:18:52.027 "enable": false 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_enable_histogram", 00:18:52.027 "params": { 00:18:52.027 "name": "nvme0n1", 00:18:52.027 "enable": true 00:18:52.027 } 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "method": "bdev_wait_for_examine" 00:18:52.027 } 00:18:52.027 ] 00:18:52.027 }, 00:18:52.027 { 00:18:52.027 "subsystem": "nbd", 00:18:52.027 "config": [] 00:18:52.027 } 00:18:52.027 ] 00:18:52.027 }' 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 695887 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 695887 ']' 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 695887 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695887 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695887' 00:18:52.027 killing process with pid 695887 00:18:52.027 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 695887 00:18:52.027 Received shutdown signal, test time was about 1.000000 seconds 00:18:52.027 00:18:52.028 Latency(us) 00:18:52.028 [2024-12-11T13:55:34.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.028 [2024-12-11T13:55:34.801Z] =================================================================================================================== 00:18:52.028 [2024-12-11T13:55:34.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:52.028 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 695887 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 695862 00:18:52.287 14:55:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 695862 ']' 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 695862 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695862 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:52.287 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:52.288 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695862' 00:18:52.288 killing process with pid 695862 00:18:52.288 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 695862 00:18:52.288 14:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 695862 00:18:52.546 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:52.546 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:52.546 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:52.546 "subsystems": [ 00:18:52.546 { 00:18:52.546 "subsystem": "keyring", 00:18:52.546 "config": [ 00:18:52.546 { 00:18:52.546 "method": "keyring_file_add_key", 00:18:52.546 "params": { 00:18:52.546 "name": "key0", 00:18:52.546 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:52.546 } 00:18:52.546 } 00:18:52.546 ] 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "subsystem": "iobuf", 00:18:52.546 "config": [ 00:18:52.546 { 00:18:52.546 "method": "iobuf_set_options", 00:18:52.546 "params": { 00:18:52.546 "small_pool_count": 8192, 00:18:52.546 "large_pool_count": 1024, 00:18:52.546 "small_bufsize": 8192, 00:18:52.546 "large_bufsize": 135168, 00:18:52.546 "enable_numa": false 00:18:52.546 } 00:18:52.546 } 00:18:52.546 ] 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "subsystem": "sock", 00:18:52.546 "config": [ 00:18:52.546 { 00:18:52.546 "method": "sock_set_default_impl", 00:18:52.546 "params": { 00:18:52.546 "impl_name": "posix" 00:18:52.546 } 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "method": "sock_impl_set_options", 00:18:52.546 "params": { 00:18:52.546 "impl_name": "ssl", 00:18:52.546 "recv_buf_size": 4096, 00:18:52.546 "send_buf_size": 4096, 00:18:52.546 "enable_recv_pipe": true, 00:18:52.546 "enable_quickack": false, 00:18:52.546 "enable_placement_id": 0, 00:18:52.546 "enable_zerocopy_send_server": true, 00:18:52.546 "enable_zerocopy_send_client": false, 00:18:52.546 "zerocopy_threshold": 0, 00:18:52.546 "tls_version": 0, 00:18:52.546 "enable_ktls": false 00:18:52.546 } 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "method": "sock_impl_set_options", 00:18:52.546 "params": { 00:18:52.546 "impl_name": "posix", 00:18:52.546 "recv_buf_size": 2097152, 00:18:52.546 "send_buf_size": 2097152, 00:18:52.546 "enable_recv_pipe": true, 00:18:52.546 "enable_quickack": false, 00:18:52.546 "enable_placement_id": 0, 00:18:52.546 "enable_zerocopy_send_server": true, 00:18:52.546 "enable_zerocopy_send_client": false, 00:18:52.546 "zerocopy_threshold": 0, 00:18:52.546 "tls_version": 0, 00:18:52.546 "enable_ktls": false 00:18:52.546 } 
00:18:52.546 } 00:18:52.546 ] 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "subsystem": "vmd", 00:18:52.546 "config": [] 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "subsystem": "accel", 00:18:52.546 "config": [ 00:18:52.546 { 00:18:52.546 "method": "accel_set_options", 00:18:52.546 "params": { 00:18:52.546 "small_cache_size": 128, 00:18:52.546 "large_cache_size": 16, 00:18:52.546 "task_count": 2048, 00:18:52.546 "sequence_count": 2048, 00:18:52.546 "buf_count": 2048 00:18:52.546 } 00:18:52.546 } 00:18:52.546 ] 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "subsystem": "bdev", 00:18:52.546 "config": [ 00:18:52.546 { 00:18:52.547 "method": "bdev_set_options", 00:18:52.547 "params": { 00:18:52.547 "bdev_io_pool_size": 65535, 00:18:52.547 "bdev_io_cache_size": 256, 00:18:52.547 "bdev_auto_examine": true, 00:18:52.547 "iobuf_small_cache_size": 128, 00:18:52.547 "iobuf_large_cache_size": 16 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "bdev_raid_set_options", 00:18:52.547 "params": { 00:18:52.547 "process_window_size_kb": 1024, 00:18:52.547 "process_max_bandwidth_mb_sec": 0 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "bdev_iscsi_set_options", 00:18:52.547 "params": { 00:18:52.547 "timeout_sec": 30 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "bdev_nvme_set_options", 00:18:52.547 "params": { 00:18:52.547 "action_on_timeout": "none", 00:18:52.547 "timeout_us": 0, 00:18:52.547 "timeout_admin_us": 0, 00:18:52.547 "keep_alive_timeout_ms": 10000, 00:18:52.547 "arbitration_burst": 0, 00:18:52.547 "low_priority_weight": 0, 00:18:52.547 "medium_priority_weight": 0, 00:18:52.547 "high_priority_weight": 0, 00:18:52.547 "nvme_adminq_poll_period_us": 10000, 00:18:52.547 "nvme_ioq_poll_period_us": 0, 00:18:52.547 "io_queue_requests": 0, 00:18:52.547 "delay_cmd_submit": true, 00:18:52.547 "transport_retry_count": 4, 00:18:52.547 "bdev_retry_count": 3, 00:18:52.547 "transport_ack_timeout": 0, 00:18:52.547 "ctrlr_loss_timeout_sec": 0, 00:18:52.547 "reconnect_delay_sec": 0, 00:18:52.547 "fast_io_fail_timeout_sec": 0, 00:18:52.547 "disable_auto_failback": false, 00:18:52.547 "generate_uuids": false, 00:18:52.547 "transport_tos": 0, 00:18:52.547 "nvme_error_stat": false, 00:18:52.547 "rdma_srq_size": 0, 00:18:52.547 "io_path_stat": false, 00:18:52.547 "allow_accel_sequence": false, 00:18:52.547 "rdma_max_cq_size": 0, 00:18:52.547 "rdma_cm_event_timeout_ms": 0, 00:18:52.547 "dhchap_digests": [ 00:18:52.547 "sha256", 00:18:52.547 "sha384", 00:18:52.547 "sha512" 00:18:52.547 ], 00:18:52.547 "dhchap_dhgroups": [ 00:18:52.547 "null", 00:18:52.547 "ffdhe2048", 00:18:52.547 "ffdhe3072", 00:18:52.547 "ffdhe4096", 00:18:52.547 "ffdhe6144", 00:18:52.547 "ffdhe8192" 00:18:52.547 ], 00:18:52.547 "rdma_umr_per_io": false 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "bdev_nvme_set_hotplug", 00:18:52.547 "params": { 00:18:52.547 "period_us": 100000, 00:18:52.547 "enable": false 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "bdev_malloc_create", 00:18:52.547 "params": { 00:18:52.547 "name": "malloc0", 00:18:52.547 "num_blocks": 8192, 00:18:52.547 "block_size": 4096, 00:18:52.547 "physical_block_size": 4096, 00:18:52.547 "uuid": "c22dfc89-d43f-4424-8a25-1e033b0f5788", 00:18:52.547 "optimal_io_boundary": 0, 00:18:52.547 "md_size": 0, 00:18:52.547 "dif_type": 0, 00:18:52.547 "dif_is_head_of_md": false, 00:18:52.547 "dif_pi_format": 0 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": 
"bdev_wait_for_examine" 00:18:52.547 } 00:18:52.547 ] 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "subsystem": "nbd", 00:18:52.547 "config": [] 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "subsystem": "scheduler", 00:18:52.547 "config": [ 00:18:52.547 { 00:18:52.547 "method": "framework_set_scheduler", 00:18:52.547 "params": { 00:18:52.547 "name": "static" 00:18:52.547 } 00:18:52.547 } 00:18:52.547 ] 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "subsystem": "nvmf", 00:18:52.547 "config": [ 00:18:52.547 { 00:18:52.547 "method": "nvmf_set_config", 00:18:52.547 "params": { 00:18:52.547 "discovery_filter": "match_any", 00:18:52.547 "admin_cmd_passthru": { 00:18:52.547 "identify_ctrlr": false 00:18:52.547 }, 00:18:52.547 "dhchap_digests": [ 00:18:52.547 "sha256", 00:18:52.547 "sha384", 00:18:52.547 "sha512" 00:18:52.547 ], 00:18:52.547 "dhchap_dhgroups": [ 00:18:52.547 "null", 00:18:52.547 "ffdhe2048", 00:18:52.547 "ffdhe3072", 00:18:52.547 "ffdhe4096", 00:18:52.547 "ffdhe6144", 00:18:52.547 "ffdhe8192" 00:18:52.547 ] 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_set_max_subsystems", 00:18:52.547 "params": { 00:18:52.547 "max_subsystems": 1024 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_set_crdt", 00:18:52.547 "params": { 00:18:52.547 "crdt1": 0, 00:18:52.547 "crdt2": 0, 00:18:52.547 "crdt3": 0 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_create_transport", 00:18:52.547 "params": { 00:18:52.547 "trtype": "TCP", 00:18:52.547 "max_queue_depth": 128, 00:18:52.547 "max_io_qpairs_per_ctrlr": 127, 00:18:52.547 "in_capsule_data_size": 4096, 00:18:52.547 "max_io_size": 131072, 00:18:52.547 "io_unit_size": 131072, 00:18:52.547 "max_aq_depth": 128, 00:18:52.547 "num_shared_buffers": 511, 00:18:52.547 "buf_cache_size": 4294967295, 00:18:52.547 "dif_insert_or_strip": false, 00:18:52.547 "zcopy": false, 00:18:52.547 "c2h_success": false, 00:18:52.547 "sock_priority": 0, 00:18:52.547 "abort_timeout_sec": 1, 00:18:52.547 "ack_timeout": 0, 00:18:52.547 "data_wr_pool_size": 0 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_create_subsystem", 00:18:52.547 "params": { 00:18:52.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.547 "allow_any_host": false, 00:18:52.547 "serial_number": "00000000000000000000", 00:18:52.547 "model_number": "SPDK bdev Controller", 00:18:52.547 "max_namespaces": 32, 00:18:52.547 "min_cntlid": 1, 00:18:52.547 "max_cntlid": 65519, 00:18:52.547 "ana_reporting": false 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_subsystem_add_host", 00:18:52.547 "params": { 00:18:52.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.547 "host": "nqn.2016-06.io.spdk:host1", 00:18:52.547 "psk": "key0" 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_subsystem_add_ns", 00:18:52.547 "params": { 00:18:52.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.547 "namespace": { 00:18:52.547 "nsid": 1, 00:18:52.547 "bdev_name": "malloc0", 00:18:52.547 "nguid": "C22DFC89D43F44248A251E033B0F5788", 00:18:52.547 "uuid": "c22dfc89-d43f-4424-8a25-1e033b0f5788", 00:18:52.547 "no_auto_visible": false 00:18:52.547 } 00:18:52.547 } 00:18:52.547 }, 00:18:52.547 { 00:18:52.547 "method": "nvmf_subsystem_add_listener", 00:18:52.547 "params": { 00:18:52.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.547 "listen_address": { 00:18:52.547 "trtype": "TCP", 00:18:52.547 "adrfam": "IPv4", 00:18:52.547 "traddr": "10.0.0.2", 00:18:52.547 "trsvcid": "4420" 00:18:52.547 
}, 00:18:52.547 "secure_channel": false, 00:18:52.547 "sock_impl": "ssl" 00:18:52.547 } 00:18:52.547 } 00:18:52.547 ] 00:18:52.547 } 00:18:52.547 ] 00:18:52.547 }' 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=696289 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 696289 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 696289 ']' 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.547 14:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.547 [2024-12-11 14:55:35.222107] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:18:52.547 [2024-12-11 14:55:35.222198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.547 [2024-12-11 14:55:35.295757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.805 [2024-12-11 14:55:35.349269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.805 [2024-12-11 14:55:35.349330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.805 [2024-12-11 14:55:35.349358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.805 [2024-12-11 14:55:35.349368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.805 [2024-12-11 14:55:35.349377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.805 [2024-12-11 14:55:35.350031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.065 [2024-12-11 14:55:35.585117] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.065 [2024-12-11 14:55:35.617151] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.065 [2024-12-11 14:55:35.617388] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=696440 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 696440 /var/tmp/bdevperf.sock 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 696440 ']' 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.633 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:53.633 "subsystems": [ 00:18:53.633 { 00:18:53.633 "subsystem": "keyring", 00:18:53.633 "config": [ 00:18:53.633 { 00:18:53.633 "method": "keyring_file_add_key", 00:18:53.633 "params": { 00:18:53.633 "name": "key0", 00:18:53.633 "path": "/tmp/tmp.hUxNXkm2GK" 00:18:53.633 } 00:18:53.633 } 00:18:53.633 ] 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "subsystem": "iobuf", 00:18:53.633 "config": [ 00:18:53.633 { 00:18:53.633 "method": "iobuf_set_options", 00:18:53.633 "params": { 00:18:53.633 "small_pool_count": 8192, 00:18:53.633 "large_pool_count": 1024, 00:18:53.633 "small_bufsize": 8192, 00:18:53.633 "large_bufsize": 135168, 00:18:53.633 "enable_numa": false 00:18:53.633 } 00:18:53.633 } 00:18:53.633 ] 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "subsystem": "sock", 00:18:53.633 "config": [ 00:18:53.633 { 00:18:53.633 "method": "sock_set_default_impl", 00:18:53.633 "params": { 00:18:53.633 "impl_name": "posix" 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "sock_impl_set_options", 00:18:53.633 "params": { 00:18:53.633 "impl_name": "ssl", 00:18:53.633 "recv_buf_size": 4096, 00:18:53.633 "send_buf_size": 4096, 00:18:53.633 "enable_recv_pipe": true, 00:18:53.633 "enable_quickack": false, 00:18:53.633 "enable_placement_id": 0, 00:18:53.633 "enable_zerocopy_send_server": true, 00:18:53.633 "enable_zerocopy_send_client": false, 00:18:53.633 "zerocopy_threshold": 0, 00:18:53.633 "tls_version": 0, 00:18:53.633 
"enable_ktls": false 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "sock_impl_set_options", 00:18:53.633 "params": { 00:18:53.633 "impl_name": "posix", 00:18:53.633 "recv_buf_size": 2097152, 00:18:53.633 "send_buf_size": 2097152, 00:18:53.633 "enable_recv_pipe": true, 00:18:53.633 "enable_quickack": false, 00:18:53.633 "enable_placement_id": 0, 00:18:53.633 "enable_zerocopy_send_server": true, 00:18:53.633 "enable_zerocopy_send_client": false, 00:18:53.633 "zerocopy_threshold": 0, 00:18:53.633 "tls_version": 0, 00:18:53.633 "enable_ktls": false 00:18:53.633 } 00:18:53.633 } 00:18:53.633 ] 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "subsystem": "vmd", 00:18:53.633 "config": [] 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "subsystem": "accel", 00:18:53.633 "config": [ 00:18:53.633 { 00:18:53.633 "method": "accel_set_options", 00:18:53.633 "params": { 00:18:53.633 "small_cache_size": 128, 00:18:53.633 "large_cache_size": 16, 00:18:53.633 "task_count": 2048, 00:18:53.633 "sequence_count": 2048, 00:18:53.633 "buf_count": 2048 00:18:53.633 } 00:18:53.633 } 00:18:53.633 ] 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "subsystem": "bdev", 00:18:53.633 "config": [ 00:18:53.633 { 00:18:53.633 "method": "bdev_set_options", 00:18:53.633 "params": { 00:18:53.633 "bdev_io_pool_size": 65535, 00:18:53.633 "bdev_io_cache_size": 256, 00:18:53.633 "bdev_auto_examine": true, 00:18:53.633 "iobuf_small_cache_size": 128, 00:18:53.633 "iobuf_large_cache_size": 16 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_raid_set_options", 00:18:53.633 "params": { 00:18:53.633 "process_window_size_kb": 1024, 00:18:53.633 "process_max_bandwidth_mb_sec": 0 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_iscsi_set_options", 00:18:53.633 "params": { 00:18:53.633 "timeout_sec": 30 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_nvme_set_options", 00:18:53.633 "params": { 00:18:53.633 "action_on_timeout": "none", 00:18:53.633 "timeout_us": 0, 00:18:53.633 "timeout_admin_us": 0, 00:18:53.633 "keep_alive_timeout_ms": 10000, 00:18:53.633 "arbitration_burst": 0, 00:18:53.633 "low_priority_weight": 0, 00:18:53.633 "medium_priority_weight": 0, 00:18:53.633 "high_priority_weight": 0, 00:18:53.633 "nvme_adminq_poll_period_us": 10000, 00:18:53.633 "nvme_ioq_poll_period_us": 0, 00:18:53.633 "io_queue_requests": 512, 00:18:53.633 "delay_cmd_submit": true, 00:18:53.633 "transport_retry_count": 4, 00:18:53.633 "bdev_retry_count": 3, 00:18:53.633 "transport_ack_timeout": 0, 00:18:53.633 "ctrlr_loss_timeout_sec": 0, 00:18:53.633 "reconnect_delay_sec": 0, 00:18:53.633 "fast_io_fail_timeout_sec": 0, 00:18:53.633 "disable_auto_failback": false, 00:18:53.633 "generate_uuids": false, 00:18:53.633 "transport_tos": 0, 00:18:53.633 "nvme_error_stat": false, 00:18:53.633 "rdma_srq_size": 0, 00:18:53.633 "io_path_stat": false, 00:18:53.633 "allow_accel_sequence": false, 00:18:53.633 "rdma_max_cq_size": 0, 00:18:53.633 "rdma_cm_event_timeout_ms": 0 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:53.633 , 00:18:53.633 "dhchap_digests": [ 00:18:53.633 "sha256", 00:18:53.633 "sha384", 00:18:53.633 "sha512" 00:18:53.633 ], 00:18:53.633 "dhchap_dhgroups": [ 00:18:53.633 "null", 00:18:53.633 "ffdhe2048", 00:18:53.633 "ffdhe3072", 00:18:53.633 "ffdhe4096", 00:18:53.633 "ffdhe6144", 00:18:53.633 "ffdhe8192" 00:18:53.633 ], 00:18:53.633 "rdma_umr_per_io": false 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_nvme_attach_controller", 00:18:53.633 "params": { 00:18:53.633 "name": "nvme0", 00:18:53.633 "trtype": "TCP", 00:18:53.633 "adrfam": "IPv4", 00:18:53.633 "traddr": "10.0.0.2", 00:18:53.633 "trsvcid": "4420", 00:18:53.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.633 "prchk_reftag": false, 00:18:53.633 "prchk_guard": false, 00:18:53.633 "ctrlr_loss_timeout_sec": 0, 00:18:53.633 "reconnect_delay_sec": 0, 00:18:53.633 "fast_io_fail_timeout_sec": 0, 00:18:53.633 "psk": "key0", 00:18:53.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.633 "hdgst": false, 00:18:53.633 "ddgst": false, 00:18:53.633 "multipath": "multipath" 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_nvme_set_hotplug", 00:18:53.633 "params": { 00:18:53.633 "period_us": 100000, 00:18:53.633 "enable": false 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_enable_histogram", 00:18:53.633 "params": { 00:18:53.633 "name": "nvme0n1", 00:18:53.633 "enable": true 00:18:53.633 } 00:18:53.633 }, 00:18:53.633 { 00:18:53.633 "method": "bdev_wait_for_examine" 00:18:53.633 } 00:18:53.634 ] 00:18:53.634 }, 00:18:53.634 { 00:18:53.634 "subsystem": "nbd", 00:18:53.634 "config": [] 00:18:53.634 } 00:18:53.634 ] 00:18:53.634 }' 00:18:53.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:53.634 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.634 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.634 [2024-12-11 14:55:36.297480] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
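Note: the second JSON block above is the initiator-side configuration for bdevperf, fed over fd 63. The TLS-relevant pieces are the same "key0" keyring entry and the "psk": "key0" argument to bdev_nvme_attach_controller. Condensed, the sequence the harness drives (flags, socket path, and helper scripts exactly as logged; bdevperf.json stands in for the fd-63 stream):

    # start bdevperf idle (-z) with the attach config on fd 63
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 63<bdevperf.json &
    # once it listens, confirm the TLS-attached controller came up...
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    # ...then kick the actual I/O run over the same socket
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests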
00:18:53.634 [2024-12-11 14:55:36.297606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid696440 ] 00:18:53.634 [2024-12-11 14:55:36.363101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.893 [2024-12-11 14:55:36.420469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.893 [2024-12-11 14:55:36.597615] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.151 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.151 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.151 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:54.151 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:54.409 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.409 14:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:54.409 Running I/O for 1 seconds... 00:18:55.607 3216.00 IOPS, 12.56 MiB/s 00:18:55.607 Latency(us) 00:18:55.607 [2024-12-11T13:55:38.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.607 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:55.607 Verification LBA range: start 0x0 length 0x2000 00:18:55.607 nvme0n1 : 1.02 3282.09 12.82 0.00 0.00 38616.61 7718.68 42719.76 00:18:55.607 [2024-12-11T13:55:38.380Z] =================================================================================================================== 00:18:55.607 [2024-12-11T13:55:38.380Z] Total : 3282.09 12.82 0.00 0.00 38616.61 7718.68 42719.76 00:18:55.607 { 00:18:55.607 "results": [ 00:18:55.607 { 00:18:55.607 "job": "nvme0n1", 00:18:55.607 "core_mask": "0x2", 00:18:55.607 "workload": "verify", 00:18:55.607 "status": "finished", 00:18:55.607 "verify_range": { 00:18:55.607 "start": 0, 00:18:55.607 "length": 8192 00:18:55.607 }, 00:18:55.607 "queue_depth": 128, 00:18:55.607 "io_size": 4096, 00:18:55.607 "runtime": 1.019167, 00:18:55.607 "iops": 3282.092139953511, 00:18:55.607 "mibps": 12.820672421693402, 00:18:55.607 "io_failed": 0, 00:18:55.607 "io_timeout": 0, 00:18:55.607 "avg_latency_us": 38616.606357747885, 00:18:55.607 "min_latency_us": 7718.684444444444, 00:18:55.607 "max_latency_us": 42719.76296296297 00:18:55.607 } 00:18:55.607 ], 00:18:55.607 "core_count": 1 00:18:55.607 } 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = 
--pid ']' 00:18:55.607 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:55.608 nvmf_trace.0 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 696440 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 696440 ']' 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 696440 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696440 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696440' 00:18:55.608 killing process with pid 696440 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 696440 00:18:55.608 Received shutdown signal, test time was about 1.000000 seconds 00:18:55.608 00:18:55.608 Latency(us) 00:18:55.608 [2024-12-11T13:55:38.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.608 [2024-12-11T13:55:38.381Z] =================================================================================================================== 00:18:55.608 [2024-12-11T13:55:38.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.608 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 696440 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:55.868 rmmod nvme_tcp 00:18:55.868 rmmod nvme_fabrics 00:18:55.868 rmmod nvme_keyring 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:55.868 14:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 696289 ']' 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 696289 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 696289 ']' 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 696289 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 696289 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 696289' 00:18:55.868 killing process with pid 696289 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 696289 00:18:55.868 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 696289 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.127 14:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NZYQamyXoG /tmp/tmp.LOM2WWJEbN /tmp/tmp.hUxNXkm2GK 00:18:58.665 00:18:58.665 real 1m23.651s 00:18:58.665 user 2m21.450s 00:18:58.665 sys 0m24.646s 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.665 ************************************ 00:18:58.665 END TEST nvmf_tls 00:18:58.665 
************************************ 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.665 ************************************ 00:18:58.665 START TEST nvmf_fips 00:18:58.665 ************************************ 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:58.665 * Looking for test storage... 00:18:58.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.665 14:55:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.665 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:58.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.666 --rc genhtml_branch_coverage=1 00:18:58.666 --rc genhtml_function_coverage=1 00:18:58.666 --rc genhtml_legend=1 00:18:58.666 --rc geninfo_all_blocks=1 00:18:58.666 --rc geninfo_unexecuted_blocks=1 00:18:58.666 00:18:58.666 ' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:58.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.666 --rc genhtml_branch_coverage=1 00:18:58.666 --rc genhtml_function_coverage=1 00:18:58.666 --rc genhtml_legend=1 00:18:58.666 --rc geninfo_all_blocks=1 00:18:58.666 --rc geninfo_unexecuted_blocks=1 00:18:58.666 00:18:58.666 ' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:58.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.666 --rc genhtml_branch_coverage=1 00:18:58.666 --rc genhtml_function_coverage=1 00:18:58.666 --rc genhtml_legend=1 00:18:58.666 --rc geninfo_all_blocks=1 00:18:58.666 --rc geninfo_unexecuted_blocks=1 00:18:58.666 00:18:58.666 ' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:58.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.666 --rc genhtml_branch_coverage=1 00:18:58.666 --rc genhtml_function_coverage=1 00:18:58.666 --rc genhtml_legend=1 00:18:58.666 --rc geninfo_all_blocks=1 00:18:58.666 --rc geninfo_unexecuted_blocks=1 00:18:58.666 00:18:58.666 ' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:58.666 14:55:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:58.666 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:58.667 Error setting digest 00:18:58.667 4082127E077F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:58.667 4082127E077F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:58.667 
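Note: the "Error setting digest" lines above are the expected outcome, not a failure. fips.sh proves the FIPS provider is actually enforcing by checking that a non-approved digest (MD5) is rejected under the generated spdk_fips.conf. The check reduces to roughly the sketch below; the provider expectations mirror the "openssl list -providers" output logged above.

    # with OPENSSL_CONF pointing at the generated spdk_fips.conf
    openssl list -providers | grep name        # expect a base and a fips provider
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded, so FIPS mode is not active" >&2
        exit 1
    fi
    echo "MD5 rejected as expected; FIPS provider is enforcing"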
14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:58.667 14:55:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.200 14:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.200 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.201 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.201 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.201 14:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.201 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.201 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:01.201 14:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:01.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:19:01.201 00:19:01.201 --- 10.0.0.2 ping statistics --- 00:19:01.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.201 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:19:01.201 00:19:01.201 --- 10.0.0.1 ping statistics --- 00:19:01.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.201 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=698690 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 698690 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 698690 ']' 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.201 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.202 [2024-12-11 14:55:43.666739] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:19:01.202 [2024-12-11 14:55:43.666820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.202 [2024-12-11 14:55:43.738163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.202 [2024-12-11 14:55:43.793681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.202 [2024-12-11 14:55:43.793735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.202 [2024-12-11 14:55:43.793764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:01.202 [2024-12-11 14:55:43.793775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:01.202 [2024-12-11 14:55:43.793785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:01.202 [2024-12-11 14:55:43.794370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Ocv 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Ocv 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Ocv 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Ocv 00:19:01.202 14:55:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:01.462 [2024-12-11 14:55:44.187720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:01.462 [2024-12-11 14:55:44.203644] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:01.462 [2024-12-11 14:55:44.203889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:01.721 malloc0 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.721 14:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=698833 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 698833 /var/tmp/bdevperf.sock 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 698833 ']' 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.721 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.721 [2024-12-11 14:55:44.331392] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:19:01.721 [2024-12-11 14:55:44.331478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid698833 ] 00:19:01.721 [2024-12-11 14:55:44.397629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.721 [2024-12-11 14:55:44.453705] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.979 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.979 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:01.979 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Ocv 00:19:02.237 14:55:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.497 [2024-12-11 14:55:45.186163] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.497 TLSTESTn1 00:19:02.758 14:55:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.758 Running I/O for 10 seconds... 
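(For reference: the TLS setup traced above can be reproduced by hand with the same RPCs the test drives. A minimal sketch, assuming an nvmf_tgt already listening on 10.0.0.2:4420 inside the namespace and a bdevperf instance serving /var/tmp/bdevperf.sock; paths are shortened to the SPDK repo root, and the PSK is the test's hard-coded sample key, not a secret:

    # Write the interleaved TLS PSK to a 0600-mode file, then register it with the bdevperf keyring.
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    # Attach a controller over TCP with TLS by referencing the registered key, then kick off the workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The verify run below, with the -q 128 -o 4096 -t 10 options from the bdevperf command line above, exercises the TLSTESTn1 bdev this attach creates.)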
00:19:05.073 2795.00 IOPS, 10.92 MiB/s
[2024-12-11T13:55:48.781Z] 2880.00 IOPS, 11.25 MiB/s
[2024-12-11T13:55:49.718Z] 2905.00 IOPS, 11.35 MiB/s
[2024-12-11T13:55:50.654Z] 2919.75 IOPS, 11.41 MiB/s
[2024-12-11T13:55:51.591Z] 2918.60 IOPS, 11.40 MiB/s
[2024-12-11T13:55:52.525Z] 2915.83 IOPS, 11.39 MiB/s
[2024-12-11T13:55:53.463Z] 2923.86 IOPS, 11.42 MiB/s
[2024-12-11T13:55:54.843Z] 2929.25 IOPS, 11.44 MiB/s
[2024-12-11T13:55:55.785Z] 2929.56 IOPS, 11.44 MiB/s
[2024-12-11T13:55:55.785Z] 2936.40 IOPS, 11.47 MiB/s
00:19:13.012 Latency(us)
00:19:13.012 [2024-12-11T13:55:55.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:13.012 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:13.012 Verification LBA range: start 0x0 length 0x2000
00:19:13.012 TLSTESTn1 : 10.02 2941.85 11.49 0.00 0.00 43428.38 7767.23 40972.14
00:19:13.012 [2024-12-11T13:55:55.785Z] ===================================================================================================================
00:19:13.012 [2024-12-11T13:55:55.785Z] Total : 2941.85 11.49 0.00 0.00 43428.38 7767.23 40972.14
00:19:13.012 {
00:19:13.012 "results": [
00:19:13.012 {
00:19:13.012 "job": "TLSTESTn1",
00:19:13.012 "core_mask": "0x4",
00:19:13.012 "workload": "verify",
00:19:13.012 "status": "finished",
00:19:13.012 "verify_range": {
00:19:13.012 "start": 0,
00:19:13.012 "length": 8192
00:19:13.012 },
00:19:13.012 "queue_depth": 128,
00:19:13.012 "io_size": 4096,
00:19:13.012 "runtime": 10.024661,
00:19:13.012 "iops": 2941.8451157600243,
00:19:13.012 "mibps": 11.491582483437595,
00:19:13.012 "io_failed": 0,
00:19:13.012 "io_timeout": 0,
00:19:13.012 "avg_latency_us": 43428.3751759545,
00:19:13.012 "min_latency_us": 7767.22962962963,
00:19:13.012 "max_latency_us": 40972.136296296296
00:19:13.012 }
00:19:13.012 ],
00:19:13.012 "core_count": 1
00:19:13.012 }
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:19:13.012 nvmf_trace.0
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 698833
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 698833 ']'
00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@958 -- # kill -0 698833 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698833 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698833' 00:19:13.012 killing process with pid 698833 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 698833 00:19:13.012 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.012 00:19:13.012 Latency(us) 00:19:13.012 [2024-12-11T13:55:55.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.012 [2024-12-11T13:55:55.785Z] =================================================================================================================== 00:19:13.012 [2024-12-11T13:55:55.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.012 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 698833 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:13.272 rmmod nvme_tcp 00:19:13.272 rmmod nvme_fabrics 00:19:13.272 rmmod nvme_keyring 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 698690 ']' 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 698690 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 698690 ']' 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 698690 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 698690 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:13.272 14:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 698690' 00:19:13.272 killing process with pid 698690 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 698690 00:19:13.272 14:55:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 698690 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.530 14:55:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.432 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:15.432 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Ocv 00:19:15.432 00:19:15.432 real 0m17.273s 00:19:15.432 user 0m17.573s 00:19:15.432 sys 0m7.449s 00:19:15.432 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.432 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:15.432 ************************************ 00:19:15.432 END TEST nvmf_fips 00:19:15.432 ************************************ 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.695 ************************************ 00:19:15.695 START TEST nvmf_control_msg_list 00:19:15.695 ************************************ 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:15.695 * Looking for test storage... 
00:19:15.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.695 --rc genhtml_branch_coverage=1 00:19:15.695 --rc genhtml_function_coverage=1 00:19:15.695 --rc genhtml_legend=1 00:19:15.695 --rc geninfo_all_blocks=1 00:19:15.695 --rc geninfo_unexecuted_blocks=1 00:19:15.695 00:19:15.695 ' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.695 --rc genhtml_branch_coverage=1 00:19:15.695 --rc genhtml_function_coverage=1 00:19:15.695 --rc genhtml_legend=1 00:19:15.695 --rc geninfo_all_blocks=1 00:19:15.695 --rc geninfo_unexecuted_blocks=1 00:19:15.695 00:19:15.695 ' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.695 --rc genhtml_branch_coverage=1 00:19:15.695 --rc genhtml_function_coverage=1 00:19:15.695 --rc genhtml_legend=1 00:19:15.695 --rc geninfo_all_blocks=1 00:19:15.695 --rc geninfo_unexecuted_blocks=1 00:19:15.695 00:19:15.695 ' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:15.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.695 --rc genhtml_branch_coverage=1 00:19:15.695 --rc genhtml_function_coverage=1 00:19:15.695 --rc genhtml_legend=1 00:19:15.695 --rc geninfo_all_blocks=1 00:19:15.695 --rc geninfo_unexecuted_blocks=1 00:19:15.695 00:19:15.695 ' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.695 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:15.696 14:55:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:18.297 14:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:18.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.297 14:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:18.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.297 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:18.298 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:18.298 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:18.298 14:56:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:18.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:18.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:19:18.298 00:19:18.298 --- 10.0.0.2 ping statistics --- 00:19:18.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.298 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:18.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:18.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:19:18.298 00:19:18.298 --- 10.0.0.1 ping statistics --- 00:19:18.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:18.298 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=702109 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 702109 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 702109 ']' 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.298 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.298 [2024-12-11 14:56:00.839407] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:19:18.298 [2024-12-11 14:56:00.839486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.298 [2024-12-11 14:56:00.918225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.298 [2024-12-11 14:56:00.973318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.298 [2024-12-11 14:56:00.973389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:18.298 [2024-12-11 14:56:00.973415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.298 [2024-12-11 14:56:00.973425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.298 [2024-12-11 14:56:00.973435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
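(The two notices above are the supported ways to inspect this run's tracepoints: snapshot them live, or copy the shared-memory buffer for offline decoding. A sketch of both, assuming the default build layout for the spdk_trace binary; the snapshot command and buffer path are quoted from the notices, and only the copy destination is illustrative:

    build/bin/spdk_trace -s nvmf -i 0    # snapshot events at runtime for instance id 0
    cp /dev/shm/nvmf_trace.0 /tmp/       # or keep the raw buffer for offline analysis/debug

The -i 0 here matches the -i 0 that the nvmf_tgt command line above was started with.)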
00:19:18.298 [2024-12-11 14:56:00.974048] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 [2024-12-11 14:56:01.120429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 Malloc0 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 14:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 [2024-12-11 14:56:01.159925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=702244 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=702246 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=702247 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:18.559 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 702244 00:19:18.559 [2024-12-11 14:56:01.228453] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:18.559 [2024-12-11 14:56:01.238380] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:18.559 [2024-12-11 14:56:01.238586] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:19.938 Initializing NVMe Controllers 00:19:19.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:19.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:19.938 Initialization complete. Launching workers. 
00:19:19.938 ======================================================== 00:19:19.938 Latency(us) 00:19:19.938 Device Information : IOPS MiB/s Average min max 00:19:19.938 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3327.99 13.00 300.05 164.34 40718.94 00:19:19.938 ======================================================== 00:19:19.938 Total : 3327.99 13.00 300.05 164.34 40718.94 00:19:19.938 00:19:19.938 Initializing NVMe Controllers 00:19:19.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:19.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:19.938 Initialization complete. Launching workers. 00:19:19.938 ======================================================== 00:19:19.938 Latency(us) 00:19:19.938 Device Information : IOPS MiB/s Average min max 00:19:19.938 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3232.00 12.62 308.97 218.24 40801.66 00:19:19.938 ======================================================== 00:19:19.938 Total : 3232.00 12.62 308.97 218.24 40801.66 00:19:19.938 00:19:19.938 Initializing NVMe Controllers 00:19:19.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:19.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:19.938 Initialization complete. Launching workers. 00:19:19.938 ======================================================== 00:19:19.938 Latency(us) 00:19:19.938 Device Information : IOPS MiB/s Average min max 00:19:19.938 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3491.00 13.64 286.07 169.38 401.01 00:19:19.938 ======================================================== 00:19:19.938 Total : 3491.00 13.64 286.07 169.38 401.01 00:19:19.938 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 702246 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 702247 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:19.938 rmmod nvme_tcp 00:19:19.938 rmmod nvme_fabrics 00:19:19.938 rmmod nvme_keyring 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 702109 ']' 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 702109 00:19:19.938 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 702109 ']' 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 702109 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 702109 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 702109' 00:19:19.939 killing process with pid 702109 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 702109 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 702109 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.939 14:56:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:22.478 00:19:22.478 real 0m6.477s 00:19:22.478 user 0m5.456s 00:19:22.478 sys 0m2.862s 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:22.478 ************************************ 00:19:22.478 END TEST nvmf_control_msg_list 00:19:22.478 ************************************ 00:19:22.478 
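The nvmf_control_msg_list run above pins the target's control message pool to a single entry (--control-msg-num 1, with 768-byte in-capsule data) and then attaches three spdk_nvme_perf initiators at once, so completions must recycle that one control message under contention; the elevated ~40 ms max latencies on two of the three workers are consistent with initiators waiting on the shared slot. A minimal sketch of the same scenario against an already running nvmf_tgt follows; it assumes the in-tree scripts/rpc.py (rpc_cmd in the trace above is the test framework's wrapper around it) and reuses the parameters recorded in this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  scripts/rpc.py bdev_malloc_create -b Malloc0 32 512    # 32 MiB ramdisk, 512-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # three single-queue-depth 4K random readers on lcores 1, 2 and 3
  for mask in 0x2 0x4 0x8; do
    build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait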
14:56:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:22.478 ************************************ 00:19:22.478 START TEST nvmf_wait_for_buf 00:19:22.478 ************************************ 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:22.478 * Looking for test storage... 00:19:22.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:22.478 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.479 --rc genhtml_branch_coverage=1 00:19:22.479 --rc genhtml_function_coverage=1 00:19:22.479 --rc genhtml_legend=1 00:19:22.479 --rc geninfo_all_blocks=1 00:19:22.479 --rc geninfo_unexecuted_blocks=1 00:19:22.479 00:19:22.479 ' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.479 --rc genhtml_branch_coverage=1 00:19:22.479 --rc genhtml_function_coverage=1 00:19:22.479 --rc genhtml_legend=1 00:19:22.479 --rc geninfo_all_blocks=1 00:19:22.479 --rc geninfo_unexecuted_blocks=1 00:19:22.479 00:19:22.479 ' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.479 --rc genhtml_branch_coverage=1 00:19:22.479 --rc genhtml_function_coverage=1 00:19:22.479 --rc genhtml_legend=1 00:19:22.479 --rc geninfo_all_blocks=1 00:19:22.479 --rc geninfo_unexecuted_blocks=1 00:19:22.479 00:19:22.479 ' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.479 --rc genhtml_branch_coverage=1 00:19:22.479 --rc genhtml_function_coverage=1 00:19:22.479 --rc genhtml_legend=1 00:19:22.479 --rc geninfo_all_blocks=1 00:19:22.479 --rc geninfo_unexecuted_blocks=1 00:19:22.479 00:19:22.479 ' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.479 14:56:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.479 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:22.480 14:56:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.385 
14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.385 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:24.386 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:24.386 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:24.386 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:24.386 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.386 14:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.386 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:19:24.645 00:19:24.645 --- 10.0.0.2 ping statistics --- 00:19:24.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.645 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:19:24.645 00:19:24.645 --- 10.0.0.1 ping statistics --- 00:19:24.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.645 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=704327 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 704327 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 704327 ']' 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.645 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.903 [2024-12-11 14:56:07.434048] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:19:24.903 [2024-12-11 14:56:07.434119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.903 [2024-12-11 14:56:07.506076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.903 [2024-12-11 14:56:07.564201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.903 [2024-12-11 14:56:07.564261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.903 [2024-12-11 14:56:07.564301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.903 [2024-12-11 14:56:07.564313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.904 [2024-12-11 14:56:07.564323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.904 [2024-12-11 14:56:07.565016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.904 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.904 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:24.904 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.904 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.904 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 Malloc0 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 [2024-12-11 14:56:07.812926] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:25.162 [2024-12-11 14:56:07.837133] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.162 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:25.162 [2024-12-11 14:56:07.924677] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:27.069 Initializing NVMe Controllers 00:19:27.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:27.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:27.069 Initialization complete. Launching workers. 00:19:27.069 ======================================================== 00:19:27.069 Latency(us) 00:19:27.069 Device Information : IOPS MiB/s Average min max 00:19:27.069 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32260.48 7992.20 63847.85 00:19:27.069 ======================================================== 00:19:27.069 Total : 129.00 16.12 32260.48 7992.20 63847.85 00:19:27.069 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:27.069 rmmod nvme_tcp 00:19:27.069 rmmod nvme_fabrics 00:19:27.069 rmmod nvme_keyring 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 704327 ']' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 704327 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 704327 ']' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 704327 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 704327 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 704327' 00:19:27.069 killing process with pid 704327 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 704327 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 704327 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.069 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:29.612 00:19:29.612 real 0m6.988s 00:19:29.612 user 0m3.225s 00:19:29.612 sys 0m2.155s 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.612 ************************************ 00:19:29.612 END TEST nvmf_wait_for_buf 00:19:29.612 ************************************ 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:29.612 14:56:11 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:31.515 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:31.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:31.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:31.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # 
echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:31.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.516 14:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.516 ************************************ 00:19:31.516 START TEST nvmf_perf_adq 00:19:31.516 ************************************ 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:31.516 * Looking for test storage... 00:19:31.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
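The trace above is nvmf/common.sh's gather_supported_nvmf_pci_devs pass: it keeps per-family arrays of PCI device IDs (e810, x722, mlx), finds two Intel E810 functions (0x8086:0x159b, driver ice) at 0000:0a:00.0 and 0000:0a:00.1, and resolves each to its kernel net device (cvl_0_0, cvl_0_1) through sysfs. A minimal sketch of that sysfs walk, assuming only the 0x8086:0x159b filter seen in the log; the loop is illustrative rather than the script's literal code:

  # Walk PCI functions and report the net devices behind matching E810 ports.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done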
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.516 --rc genhtml_branch_coverage=1 00:19:31.516 --rc genhtml_function_coverage=1 00:19:31.516 --rc genhtml_legend=1 00:19:31.516 --rc geninfo_all_blocks=1 00:19:31.516 --rc geninfo_unexecuted_blocks=1 00:19:31.516 00:19:31.516 ' 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.516 --rc genhtml_branch_coverage=1 00:19:31.516 --rc genhtml_function_coverage=1 00:19:31.516 --rc genhtml_legend=1 00:19:31.516 --rc geninfo_all_blocks=1 00:19:31.516 --rc geninfo_unexecuted_blocks=1 00:19:31.516 00:19:31.516 ' 00:19:31.516 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:31.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.516 --rc genhtml_branch_coverage=1 00:19:31.516 --rc genhtml_function_coverage=1 00:19:31.516 --rc genhtml_legend=1 00:19:31.516 --rc geninfo_all_blocks=1 00:19:31.517 --rc geninfo_unexecuted_blocks=1 00:19:31.517 00:19:31.517 ' 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:31.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.517 --rc genhtml_branch_coverage=1 00:19:31.517 --rc genhtml_function_coverage=1 00:19:31.517 --rc genhtml_legend=1 00:19:31.517 --rc geninfo_all_blocks=1 00:19:31.517 --rc geninfo_unexecuted_blocks=1 00:19:31.517 00:19:31.517 ' 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.517 14:56:14 
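The scripts/common.sh block above is a pure version gate: `lt 1.15 2` asks whether the installed lcov predates 2.x so the matching `--rc` branch/function coverage options can be exported. A two-argument sketch of that comparison under the same split-on-dots-and-dashes convention (not the script's literal helper):

  # Succeed when version $1 sorts strictly before version $2.
  lt() {
      local -a a b; local i
      IFS='.-' read -ra a <<< "$1"
      IFS='.-' read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal is not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2.x"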
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:31.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:31.517 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:31.517 14:56:14 
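One genuine defect is captured above: nvmf/common.sh line 33 runs `'[' '' -eq 1 ']'`, a numeric `-eq` test against an empty expansion, which is exactly what prints "integer expression expected" every time the file is sourced. The log does not show which variable is empty, so the name below is a stand-in; the usual guard is to check for content before comparing:

  # Numeric tests need a non-empty operand; short-circuit on the empty case.
  flag=""    # illustrative stand-in for the unset variable at line 33
  if [[ -n "$flag" && "$flag" -eq 1 ]]; then
      echo "flag enabled"
  fi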
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:34.049 14:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:34.049 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:34.049 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:34.049 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:34.049 14:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:34.049 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:34.049 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:34.308 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:36.845 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
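adq_reload_driver, traced above, cycles the E810 driver so each ADQ pass starts from a clean channel configuration: load the mqprio qdisc module, unload ice, reload it, and wait for the ports to come back as cvl_0_0/cvl_0_1. The same four steps in isolation (flags from the log; the `|| true` is illustrative, since the first rmmod can find nothing loaded):

  modprobe -a sch_mqprio    # qdisc module used to carve ADQ traffic classes
  rmmod ice || true         # drop the driver and any stale channel state
  modprobe ice              # reload; the E810 ports re-enumerate
  sleep 5                   # give link and driver init time to settle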
gather_supported_nvmf_pci_devs 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:42.120 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:42.120 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.120 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:42.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:42.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:42.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:19:42.121 00:19:42.121 --- 10.0.0.2 ping statistics --- 00:19:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.121 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:42.121 00:19:42.121 --- 10.0.0.1 ping statistics --- 00:19:42.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.121 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=709170 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 709170 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 709170 ']' 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.121 [2024-12-11 14:56:24.532905] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
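nvmf_tcp_init, above, builds the two-port loopback topology the whole test rides on: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. The setup distilled (all names and addresses from the log):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator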
00:19:42.121 [2024-12-11 14:56:24.532989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.121 [2024-12-11 14:56:24.624051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.121 [2024-12-11 14:56:24.698219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.121 [2024-12-11 14:56:24.698274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.121 [2024-12-11 14:56:24.698313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.121 [2024-12-11 14:56:24.698336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.121 [2024-12-11 14:56:24.698354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:42.121 [2024-12-11 14:56:24.700275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.121 [2024-12-11 14:56:24.700340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.121 [2024-12-11 14:56:24.700414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.121 [2024-12-11 14:56:24.700405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:42.121 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 
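With nvmf_tgt up on four cores (-m 0xF) and waiting for RPC, adq_configure_nvmf_target 0 begins by tuning the posix sock implementation before the framework initializes: placement-id mode 0 (the first of the two placement settings this test exercises) plus zero-copy send on the server side. rpc_cmd is the test harness wrapper; issued directly against the app's RPC socket, the same calls would look roughly like this (the scripts/rpc.py path assumes an SPDK checkout):

  ./scripts/rpc.py sock_impl_set_options -i posix \
      --enable-placement-id 0 --enable-zerocopy-send-server
  ./scripts/rpc.py framework_start_init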
14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 [2024-12-11 14:56:25.033299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 Malloc1 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:42.380 [2024-12-11 14:56:25.099966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=709267 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:42.380 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:44.915 "tick_rate": 2700000000, 00:19:44.915 "poll_groups": [ 00:19:44.915 { 00:19:44.915 "name": "nvmf_tgt_poll_group_000", 00:19:44.915 "admin_qpairs": 1, 00:19:44.915 "io_qpairs": 1, 00:19:44.915 "current_admin_qpairs": 1, 00:19:44.915 "current_io_qpairs": 1, 00:19:44.915 "pending_bdev_io": 0, 00:19:44.915 "completed_nvme_io": 19758, 00:19:44.915 "transports": [ 00:19:44.915 { 00:19:44.915 "trtype": "TCP" 00:19:44.915 } 00:19:44.915 ] 00:19:44.915 }, 00:19:44.915 { 00:19:44.915 "name": "nvmf_tgt_poll_group_001", 00:19:44.915 "admin_qpairs": 0, 00:19:44.915 "io_qpairs": 1, 00:19:44.915 "current_admin_qpairs": 0, 00:19:44.915 "current_io_qpairs": 1, 00:19:44.915 "pending_bdev_io": 0, 00:19:44.915 "completed_nvme_io": 20656, 00:19:44.915 "transports": [ 00:19:44.915 { 00:19:44.915 "trtype": "TCP" 00:19:44.915 } 00:19:44.915 ] 00:19:44.915 }, 00:19:44.915 { 00:19:44.915 "name": "nvmf_tgt_poll_group_002", 00:19:44.915 "admin_qpairs": 0, 00:19:44.915 "io_qpairs": 1, 00:19:44.915 "current_admin_qpairs": 0, 00:19:44.915 "current_io_qpairs": 1, 00:19:44.915 "pending_bdev_io": 0, 00:19:44.915 "completed_nvme_io": 19940, 00:19:44.915 "transports": [ 00:19:44.915 { 00:19:44.915 "trtype": "TCP" 00:19:44.915 } 00:19:44.915 ] 00:19:44.915 }, 00:19:44.915 { 00:19:44.915 "name": "nvmf_tgt_poll_group_003", 00:19:44.915 "admin_qpairs": 0, 00:19:44.915 "io_qpairs": 1, 00:19:44.915 "current_admin_qpairs": 0, 00:19:44.915 "current_io_qpairs": 1, 00:19:44.915 "pending_bdev_io": 0, 00:19:44.915 "completed_nvme_io": 19892, 00:19:44.915 "transports": [ 00:19:44.915 { 00:19:44.915 "trtype": "TCP" 00:19:44.915 } 00:19:44.915 ] 00:19:44.915 } 00:19:44.915 ] 00:19:44.915 }' 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:44.915 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 709267 00:19:53.035 Initializing NVMe Controllers 00:19:53.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:53.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:53.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:53.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:53.035 
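The nvmf_get_stats dump above is the functional check for connection spreading: four perf cores (-c 0xF0) each opened one qpair, and every one of the target's four poll groups reports current_io_qpairs == 1 with roughly 20k completed I/Os. The test counts the matching groups with jq and compares against 4; an equivalent standalone check (paraphrasing its `select(...) | length | wc -l` pipeline):

  count=$(./scripts/rpc.py nvmf_get_stats \
          | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | .name' \
          | wc -l)
  (( count == 4 )) || echo "qpairs not spread across poll groups: got $count"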
Initialization complete. Launching workers. 00:19:53.035 ======================================================== 00:19:53.035 Latency(us) 00:19:53.035 Device Information : IOPS MiB/s Average min max 00:19:53.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10191.20 39.81 6281.74 2583.60 10326.55 00:19:53.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10671.30 41.68 5998.31 2495.70 9993.98 00:19:53.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10181.10 39.77 6286.50 2267.99 11158.04 00:19:53.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10161.80 39.69 6297.87 2451.19 10662.64 00:19:53.035 ======================================================== 00:19:53.035 Total : 41205.40 160.96 6213.49 2267.99 11158.04 00:19:53.035 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:53.035 rmmod nvme_tcp 00:19:53.035 rmmod nvme_fabrics 00:19:53.035 rmmod nvme_keyring 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 709170 ']' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 709170 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 709170 ']' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 709170 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 709170 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 709170' 00:19:53.035 killing process with pid 709170 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 709170 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 709170 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.035 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.944 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.944 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:54.944 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:54.944 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:55.923 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:58.488 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
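nvmftestfini, above, unwinds the whole setup after the ~41k IOPS baseline run: the nvme-tcp module stack is removed, the target process (709170) is killed, SPDK's tagged iptables rules are filtered back out, the namespace teardown runs with xtrace suppressed, and the initiator address is flushed before adq_reload_driver cycles ice for the next pass. Condensed, with the namespace deletion inferred from _remove_spdk_ns rather than shown in the trace:

  for m in nvme-tcp nvme-fabrics nvme-keyring; do
      modprobe -v -r "$m" || true    # the rmmod lines in the log come from these
  done
  kill "$nvmfpid"                                        # 709170 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns
  ip -4 addr flush cvl_0_1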
00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:03.765 14:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:03.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:03.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:03.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:03.765 14:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:03.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:03.765 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:03.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:03.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:20:03.766
00:20:03.766 --- 10.0.0.2 ping statistics ---
00:20:03.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:03.766 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:03.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:03.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:20:03.766
00:20:03.766 --- 10.0.0.1 ping statistics ---
00:20:03.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:03.766 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:20:03.766 net.core.busy_poll = 1
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:20:03.766 net.core.busy_read = 1
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:20:03.766 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=711942
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 711942
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 711942 ']'
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:03.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:03.766 [2024-12-11 14:56:46.121295] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:03.766 [2024-12-11 14:56:46.121387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:03.766 [2024-12-11 14:56:46.193589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:03.766 [2024-12-11 14:56:46.247955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
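The adq_configure_driver sequence traced above reduces to a handful of commands. A condensed sketch, with the assumptions spelled out: the device name cvl_0_0 and the 10.0.0.2:4420 listener are specific to this run, and the test actually executes each command inside the cvl_0_0_ns_spdk namespace via ip netns exec:

    dev=cvl_0_0
    ethtool --offload "$dev" hw-tc-offload on                  # NIC-side TC offload for ADQ
    ethtool --set-priv-flags "$dev" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1        # busy-poll the test sockets
    tc qdisc add dev "$dev" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$dev" ingress
    tc filter add dev "$dev" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The mqprio qdisc splits the queues into two traffic classes, and the flower filter steers NVMe/TCP traffic for port 4420 into the second class in hardware (skip_sw), so the target's poll groups end up busy-polling dedicated queues.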
00:20:03.766 [2024-12-11 14:56:46.248011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.766 [2024-12-11 14:56:46.248040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.766 [2024-12-11 14:56:46.248051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.766 [2024-12-11 14:56:46.248061] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.766 [2024-12-11 14:56:46.249471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.766 [2024-12-11 14:56:46.249534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.766 [2024-12-11 14:56:46.249667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.766 [2024-12-11 14:56:46.249670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.766 14:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:03.766 [2024-12-11 14:56:46.523845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.766 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.767 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:03.767 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.767 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.025 Malloc1 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:04.025 [2024-12-11 14:56:46.592558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=711980 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:04.025 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:05.929 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:05.929 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.929 14:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:20:05.930 "tick_rate": 2700000000,
00:20:05.930 "poll_groups": [
00:20:05.930 {
00:20:05.930 "name": "nvmf_tgt_poll_group_000",
00:20:05.930 "admin_qpairs": 1,
00:20:05.930 "io_qpairs": 2,
00:20:05.930 "current_admin_qpairs": 1,
00:20:05.930 "current_io_qpairs": 2,
00:20:05.930 "pending_bdev_io": 0,
00:20:05.930 "completed_nvme_io": 25603,
00:20:05.930 "transports": [
00:20:05.930 {
00:20:05.930 "trtype": "TCP"
00:20:05.930 }
00:20:05.930 ]
00:20:05.930 },
00:20:05.930 {
00:20:05.930 "name": "nvmf_tgt_poll_group_001",
00:20:05.930 "admin_qpairs": 0,
00:20:05.930 "io_qpairs": 2,
00:20:05.930 "current_admin_qpairs": 0,
00:20:05.930 "current_io_qpairs": 2,
00:20:05.930 "pending_bdev_io": 0,
00:20:05.930 "completed_nvme_io": 25726,
00:20:05.930 "transports": [
00:20:05.930 {
00:20:05.930 "trtype": "TCP"
00:20:05.930 }
00:20:05.930 ]
00:20:05.930 },
00:20:05.930 {
00:20:05.930 "name": "nvmf_tgt_poll_group_002",
00:20:05.930 "admin_qpairs": 0,
00:20:05.930 "io_qpairs": 0,
00:20:05.930 "current_admin_qpairs": 0,
00:20:05.930 "current_io_qpairs": 0,
00:20:05.930 "pending_bdev_io": 0,
00:20:05.930 "completed_nvme_io": 0,
00:20:05.930 "transports": [
00:20:05.930 {
00:20:05.930 "trtype": "TCP"
00:20:05.930 }
00:20:05.930 ]
00:20:05.930 },
00:20:05.930 {
00:20:05.930 "name": "nvmf_tgt_poll_group_003",
00:20:05.930 "admin_qpairs": 0,
00:20:05.930 "io_qpairs": 0,
00:20:05.930 "current_admin_qpairs": 0,
00:20:05.930 "current_io_qpairs": 0,
00:20:05.930 "pending_bdev_io": 0,
00:20:05.930 "completed_nvme_io": 0,
00:20:05.930 "transports": [
00:20:05.930 {
00:20:05.930 "trtype": "TCP"
00:20:05.930 }
00:20:05.930 ]
00:20:05.930 }
00:20:05.930 ]
00:20:05.930 }'
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:20:05.930 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 711980
00:20:14.043 Initializing NVMe Controllers
00:20:14.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:14.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:14.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:14.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:14.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:14.043 Initialization complete. Launching workers.
00:20:14.043 ========================================================
00:20:14.043 Latency(us)
00:20:14.043 Device Information : IOPS MiB/s Average min max
00:20:14.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6344.97 24.79 10087.24 1943.15 54709.20
00:20:14.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7022.94 27.43 9130.79 1736.56 54048.22
00:20:14.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7934.31 30.99 8068.96 1825.11 54690.62
00:20:14.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5712.39 22.31 11216.09 1668.68 54697.40
00:20:14.043 ========================================================
00:20:14.043 Total : 27014.60 105.53 9484.52 1668.68 54709.20
00:20:14.043
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:14.043 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:14.043 rmmod nvme_tcp
00:20:14.302 rmmod nvme_fabrics
00:20:14.302 rmmod nvme_keyring
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 711942 ']'
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 711942
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 711942 ']'
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 711942
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 711942
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 711942'
00:20:14.302 killing process with pid 711942
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 711942
00:20:14.302 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 711942
00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:14.559 14:56:57
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.559 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:16.468 00:20:16.468 real 0m45.159s 00:20:16.468 user 2m39.841s 00:20:16.468 sys 0m9.818s 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.468 ************************************ 00:20:16.468 END TEST nvmf_perf_adq 00:20:16.468 ************************************ 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:16.468 ************************************ 00:20:16.468 START TEST nvmf_shutdown 00:20:16.468 ************************************ 00:20:16.468 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:16.728 * Looking for test storage... 
00:20:16.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:16.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.728 --rc genhtml_branch_coverage=1 00:20:16.728 --rc genhtml_function_coverage=1 00:20:16.728 --rc genhtml_legend=1 00:20:16.728 --rc geninfo_all_blocks=1 00:20:16.728 --rc geninfo_unexecuted_blocks=1 00:20:16.728 00:20:16.728 ' 00:20:16.728 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:16.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.728 --rc genhtml_branch_coverage=1 00:20:16.728 --rc genhtml_function_coverage=1 00:20:16.728 --rc genhtml_legend=1 00:20:16.728 --rc geninfo_all_blocks=1 00:20:16.728 --rc geninfo_unexecuted_blocks=1 00:20:16.728 00:20:16.729 ' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:16.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.729 --rc genhtml_branch_coverage=1 00:20:16.729 --rc genhtml_function_coverage=1 00:20:16.729 --rc genhtml_legend=1 00:20:16.729 --rc geninfo_all_blocks=1 00:20:16.729 --rc geninfo_unexecuted_blocks=1 00:20:16.729 00:20:16.729 ' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:16.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:16.729 --rc genhtml_branch_coverage=1 00:20:16.729 --rc genhtml_function_coverage=1 00:20:16.729 --rc genhtml_legend=1 00:20:16.729 --rc geninfo_all_blocks=1 00:20:16.729 --rc geninfo_unexecuted_blocks=1 00:20:16.729 00:20:16.729 ' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
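The lcov version check traced above (scripts/common.sh cmp_versions deciding whether lcov 1.15 predates 2) is easier to follow as a standalone function. A rough re-implementation of the same idea, not the literal scripts/common.sh code, and assuming purely numeric version components:

    ver_lt() {    # succeeds when $1 sorts before $2, comparing dot/dash/colon separated fields
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2"

This is why the trace above settles on the --rc lcov_branch_coverage=1 spelling of the coverage options.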
00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:16.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:16.729 14:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:16.729 ************************************ 00:20:16.729 START TEST nvmf_shutdown_tc1 00:20:16.729 ************************************ 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:16.729 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:19.262 14:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:19.262 14:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:19.262 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:19.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:19.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:19.262 14:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:19.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE")
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:19.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:19.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms
00:20:19.262
00:20:19.262 --- 10.0.0.2 ping statistics ---
00:20:19.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:19.262 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:20:19.262 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:19.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:19.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms
00:20:19.262
00:20:19.262 --- 10.0.0.1 ping statistics ---
00:20:19.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:19.263 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=715365
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 715365
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 715365 ']'
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:19.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.263 [2024-12-11 14:57:01.735917] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:19.263 [2024-12-11 14:57:01.735996] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.263 [2024-12-11 14:57:01.810286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.263 [2024-12-11 14:57:01.870157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.263 [2024-12-11 14:57:01.870220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.263 [2024-12-11 14:57:01.870249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.263 [2024-12-11 14:57:01.870260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.263 [2024-12-11 14:57:01.870270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.263 [2024-12-11 14:57:01.872113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.263 [2024-12-11 14:57:01.872196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:19.263 [2024-12-11 14:57:01.872137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.263 [2024-12-11 14:57:01.872199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.263 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.263 [2024-12-11 14:57:02.019941] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:19.263 14:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.263 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.521 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:19.521 Malloc1 
00:20:19.521 [2024-12-11 14:57:02.107556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.521 Malloc2 00:20:19.521 Malloc3 00:20:19.521 Malloc4 00:20:19.521 Malloc5 00:20:19.779 Malloc6 00:20:19.779 Malloc7 00:20:19.779 Malloc8 00:20:19.779 Malloc9 00:20:19.779 Malloc10 00:20:19.779 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.779 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:19.779 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:19.779 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=715529 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 715529 /var/tmp/bdevperf.sock 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 715529 ']' 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 
"trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:20.037 { 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme$subsystem", 00:20:20.037 "trtype": "$TEST_TRANSPORT", 00:20:20.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "$NVMF_PORT", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:20.037 "hdgst": ${hdgst:-false}, 00:20:20.037 "ddgst": ${ddgst:-false} 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 } 00:20:20.037 EOF 00:20:20.037 )") 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:20.037 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme1", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme2", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme3", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme4", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme5", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme6", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme7", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.037 "method": "bdev_nvme_attach_controller" 00:20:20.037 },{ 00:20:20.037 "params": { 00:20:20.037 "name": "Nvme8", 00:20:20.037 "trtype": "tcp", 00:20:20.037 "traddr": "10.0.0.2", 00:20:20.037 "adrfam": "ipv4", 00:20:20.037 "trsvcid": "4420", 00:20:20.037 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:20.037 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:20.037 "hdgst": false, 00:20:20.037 "ddgst": false 00:20:20.037 }, 00:20:20.038 "method": "bdev_nvme_attach_controller" 00:20:20.038 },{ 00:20:20.038 "params": { 00:20:20.038 "name": "Nvme9", 00:20:20.038 "trtype": "tcp", 00:20:20.038 "traddr": "10.0.0.2", 00:20:20.038 "adrfam": "ipv4", 00:20:20.038 "trsvcid": "4420", 00:20:20.038 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:20.038 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:20.038 "hdgst": false, 00:20:20.038 "ddgst": false 00:20:20.038 }, 00:20:20.038 "method": "bdev_nvme_attach_controller" 00:20:20.038 },{ 00:20:20.038 "params": { 00:20:20.038 "name": "Nvme10", 00:20:20.038 "trtype": "tcp", 00:20:20.038 "traddr": "10.0.0.2", 00:20:20.038 "adrfam": "ipv4", 00:20:20.038 "trsvcid": "4420", 00:20:20.038 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:20.038 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:20.038 "hdgst": false, 00:20:20.038 "ddgst": false 00:20:20.038 }, 00:20:20.038 "method": "bdev_nvme_attach_controller" 00:20:20.038 }' 00:20:20.038 [2024-12-11 14:57:02.605890] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:20.038 [2024-12-11 14:57:02.605969] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:20.038 [2024-12-11 14:57:02.679719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.038 [2024-12-11 14:57:02.739751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 715529 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:21.941 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:22.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 715529 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 715365 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 
3 4 5 6 7 8 9 10 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 "trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 "trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 "trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 
"trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 "trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 "trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.881 { 00:20:22.881 "params": { 00:20:22.881 "name": "Nvme$subsystem", 00:20:22.881 "trtype": "$TEST_TRANSPORT", 00:20:22.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.881 "adrfam": "ipv4", 00:20:22.881 "trsvcid": "$NVMF_PORT", 00:20:22.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.881 "hdgst": ${hdgst:-false}, 00:20:22.881 "ddgst": ${ddgst:-false} 00:20:22.881 }, 00:20:22.881 "method": "bdev_nvme_attach_controller" 00:20:22.881 } 00:20:22.881 EOF 00:20:22.881 )") 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:22.881 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.882 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.882 { 00:20:22.882 
"params": { 00:20:22.882 "name": "Nvme$subsystem", 00:20:22.882 "trtype": "$TEST_TRANSPORT", 00:20:22.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.882 "adrfam": "ipv4", 00:20:22.882 "trsvcid": "$NVMF_PORT", 00:20:22.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.882 "hdgst": ${hdgst:-false}, 00:20:22.882 "ddgst": ${ddgst:-false} 00:20:22.882 }, 00:20:22.882 "method": "bdev_nvme_attach_controller" 00:20:22.882 } 00:20:22.882 EOF 00:20:22.882 )") 00:20:22.882 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.139 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.139 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.139 { 00:20:23.139 "params": { 00:20:23.139 "name": "Nvme$subsystem", 00:20:23.139 "trtype": "$TEST_TRANSPORT", 00:20:23.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.139 "adrfam": "ipv4", 00:20:23.139 "trsvcid": "$NVMF_PORT", 00:20:23.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.139 "hdgst": ${hdgst:-false}, 00:20:23.139 "ddgst": ${ddgst:-false} 00:20:23.139 }, 00:20:23.139 "method": "bdev_nvme_attach_controller" 00:20:23.139 } 00:20:23.139 EOF 00:20:23.139 )") 00:20:23.139 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.139 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:23.139 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:23.139 { 00:20:23.139 "params": { 00:20:23.139 "name": "Nvme$subsystem", 00:20:23.139 "trtype": "$TEST_TRANSPORT", 00:20:23.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.139 "adrfam": "ipv4", 00:20:23.139 "trsvcid": "$NVMF_PORT", 00:20:23.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.139 "hdgst": ${hdgst:-false}, 00:20:23.139 "ddgst": ${ddgst:-false} 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 } 00:20:23.140 EOF 00:20:23.140 )") 00:20:23.140 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:23.140 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:23.140 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:23.140 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme1", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme2", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme3", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme4", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme5", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme6", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme7", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:23.140 "hdgst": false, 00:20:23.140 "ddgst": false 00:20:23.140 }, 00:20:23.140 "method": "bdev_nvme_attach_controller" 00:20:23.140 },{ 00:20:23.140 "params": { 00:20:23.140 "name": "Nvme8", 00:20:23.140 "trtype": "tcp", 00:20:23.140 "traddr": "10.0.0.2", 00:20:23.140 "adrfam": "ipv4", 00:20:23.140 "trsvcid": "4420", 00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:23.140 "hdgst": false,
00:20:23.140 "ddgst": false
00:20:23.140 },
00:20:23.140 "method": "bdev_nvme_attach_controller"
00:20:23.140 },{
00:20:23.140 "params": {
00:20:23.140 "name": "Nvme9",
00:20:23.140 "trtype": "tcp",
00:20:23.140 "traddr": "10.0.0.2",
00:20:23.140 "adrfam": "ipv4",
00:20:23.140 "trsvcid": "4420",
00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:20:23.140 "hdgst": false,
00:20:23.140 "ddgst": false
00:20:23.140 },
00:20:23.140 "method": "bdev_nvme_attach_controller"
00:20:23.140 },{
00:20:23.140 "params": {
00:20:23.140 "name": "Nvme10",
00:20:23.140 "trtype": "tcp",
00:20:23.140 "traddr": "10.0.0.2",
00:20:23.140 "adrfam": "ipv4",
00:20:23.140 "trsvcid": "4420",
00:20:23.140 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:20:23.140 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:20:23.140 "hdgst": false,
00:20:23.140 "ddgst": false
00:20:23.140 },
00:20:23.140 "method": "bdev_nvme_attach_controller"
00:20:23.140 }'
00:20:23.140 [2024-12-11 14:57:05.673420] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:23.140 [2024-12-11 14:57:05.673498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid715855 ]
00:20:23.140 [2024-12-11 14:57:05.747434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.140 [2024-12-11 14:57:05.807061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:20:24.519 Running I/O for 1 seconds...
00:20:25.894 1805.00 IOPS, 112.81 MiB/s
00:20:25.894
00:20:25.894 Latency(us)
00:20:25.894 [2024-12-11T13:57:08.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:25.894 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme1n1 : 1.11 233.57 14.60 0.00 0.00 269905.63 3179.71 239230.67
00:20:25.894 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme2n1 : 1.12 227.87 14.24 0.00 0.00 273564.63 21068.61 257872.02
00:20:25.894 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme3n1 : 1.10 233.79 14.61 0.00 0.00 261365.38 18835.53 254765.13
00:20:25.894 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme4n1 : 1.11 231.62 14.48 0.00 0.00 259232.43 18835.53 262532.36
00:20:25.894 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme5n1 : 1.13 235.18 14.70 0.00 0.00 250085.56 7670.14 256318.58
00:20:25.894 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme6n1 : 1.14 228.24 14.26 0.00 0.00 254951.56 2767.08 264085.81
00:20:25.894 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme7n1 : 1.12 232.03 14.50 0.00 0.00 244651.60 5485.61 259425.47
00:20:25.894 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme8n1 : 1.13 228.33 14.27 0.00 0.00 245997.86 1480.63 260978.92
00:20:25.894 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme9n1 : 1.19 270.00 16.87 0.00 0.00 205952.72 6262.33 279620.27
00:20:25.894 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:25.894 Verification LBA range: start 0x0 length 0x400
00:20:25.894 Nvme10n1 : 1.17 225.85 14.12 0.00 0.00 240658.72 1171.15 274959.93
00:20:25.894 [2024-12-11T13:57:08.667Z] ===================================================================================================================
00:20:25.894 [2024-12-11T13:57:08.667Z] Total : 2346.46 146.65 0.00 0.00 249547.39 1171.15 279620.27
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:26.152 rmmod nvme_tcp
00:20:26.152 rmmod nvme_fabrics
00:20:26.152 rmmod nvme_keyring
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 715365 ']'
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 715365
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 715365 ']'
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 715365
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 --
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 715365 00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 715365' 00:20:26.152 killing process with pid 715365 00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 715365 00:20:26.152 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 715365 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.719 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.627 00:20:28.627 real 0m11.888s 00:20:28.627 user 0m34.261s 00:20:28.627 sys 0m3.281s 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.627 ************************************ 00:20:28.627 END TEST nvmf_shutdown_tc1 00:20:28.627 ************************************ 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:28.627 ************************************ 00:20:28.627 START TEST nvmf_shutdown_tc2 00:20:28.627 ************************************ 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:28.627 14:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:28.627 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:28.627 14:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.627 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:28.628 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:28.628 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:28.628 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.628 14:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:28.628 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:28.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:28.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:20:28.887
00:20:28.887 --- 10.0.0.2 ping statistics ---
00:20:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:28.887 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:28.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
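Note the ipts wrapper in the trace above: every rule it installs carries an SPDK_NVMF comment tag, and the iptr step seen during teardown (iptables-save piped through grep -v SPDK_NVMF into iptables-restore) sweeps all tagged rules at once without tracking them individually. A condensed sketch of the pair as reconstructed from the trace (the reply and statistics for the namespace-side ping continue below):

    # ipts: install an iptables rule tagged with an SPDK_NVMF comment.
    ipts() {
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    # iptr: reload the ruleset minus every tagged rule, in one sweep.
    iptr() {
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # as in the trace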
00:20:28.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:20:28.887 00:20:28.887 --- 10.0.0.1 ping statistics --- 00:20:28.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.887 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=717132 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 717132 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 717132 ']' 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
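With both directions of the link verified by ping, the target application is launched inside the namespace (NVMF_APP is prefixed with the ip netns exec wrapper, which is why the recorded command line shows the prefix twice) and waitforlisten polls, with a budget of 100 retries, until the RPC socket at /var/tmp/spdk.sock answers. A simplified stand-in for that launch-and-wait step, assuming the paths and core mask shown in the trace (the real waitforlisten helper performs more validation than a bare socket check):

    # Start nvmf_tgt in the target namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        [ -S /var/tmp/spdk.sock ] && break  # socket exists: app is accepting RPCs
        kill -0 "$nvmfpid" || exit 1        # app died before coming up
        sleep 0.1
    done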
00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.887 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:28.887 [2024-12-11 14:57:11.579360] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:28.887 [2024-12-11 14:57:11.579449] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.887 [2024-12-11 14:57:11.653285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.146 [2024-12-11 14:57:11.709707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.146 [2024-12-11 14:57:11.709764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.146 [2024-12-11 14:57:11.709793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.146 [2024-12-11 14:57:11.709804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.146 [2024-12-11 14:57:11.709814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.146 [2024-12-11 14:57:11.711296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.146 [2024-12-11 14:57:11.711403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.146 [2024-12-11 14:57:11.711533] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:29.146 [2024-12-11 14:57:11.711536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.146 [2024-12-11 14:57:11.888757] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:29.146 14:57:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.146 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.406 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.406 Malloc1 
00:20:29.406 [2024-12-11 14:57:11.992837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.406 Malloc2 00:20:29.406 Malloc3 00:20:29.406 Malloc4 00:20:29.406 Malloc5 00:20:29.663 Malloc6 00:20:29.664 Malloc7 00:20:29.664 Malloc8 00:20:29.664 Malloc9 00:20:29.664 Malloc10 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=717310 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 717310 /var/tmp/bdevperf.sock 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 717310 ']' 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
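The create_subsystems loop above (shutdown.sh@28 and @29) appends one heredoc per subsystem to rpcs.txt and then replays the whole file through a single rpc_cmd invocation; the Malloc1 through Malloc10 notices are the bdevs that batch creates. The trace shows only the loop and the cat calls, so the RPC lines below are a plausible reconstruction: the method names are real SPDK RPCs, but the sizes and flags are assumptions.

    # Build one batch file with everything each subsystem needs, then play
    # it through a single rpc.py session instead of dozens of spawns.
    rm -f "$testdir/rpcs.txt"
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 128 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"  # one connection, ten subsystems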
00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.922 { 00:20:29.922 "params": { 00:20:29.922 "name": "Nvme$subsystem", 00:20:29.922 "trtype": "$TEST_TRANSPORT", 00:20:29.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.922 "adrfam": "ipv4", 00:20:29.922 "trsvcid": "$NVMF_PORT", 00:20:29.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.922 "hdgst": ${hdgst:-false}, 00:20:29.922 "ddgst": ${ddgst:-false} 00:20:29.922 }, 00:20:29.922 "method": "bdev_nvme_attach_controller" 00:20:29.922 } 00:20:29.922 EOF 00:20:29.922 )") 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.922 { 00:20:29.922 "params": { 00:20:29.922 "name": "Nvme$subsystem", 00:20:29.922 "trtype": "$TEST_TRANSPORT", 00:20:29.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.922 "adrfam": "ipv4", 00:20:29.922 "trsvcid": "$NVMF_PORT", 00:20:29.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.922 "hdgst": ${hdgst:-false}, 00:20:29.922 "ddgst": ${ddgst:-false} 00:20:29.922 }, 00:20:29.922 "method": "bdev_nvme_attach_controller" 00:20:29.922 } 00:20:29.922 EOF 00:20:29.922 )") 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.922 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.922 { 00:20:29.922 "params": { 00:20:29.922 "name": "Nvme$subsystem", 00:20:29.922 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.923 { 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme$subsystem", 00:20:29.923 "trtype": "$TEST_TRANSPORT", 00:20:29.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "$NVMF_PORT", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.923 "hdgst": ${hdgst:-false}, 00:20:29.923 "ddgst": ${ddgst:-false} 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 } 00:20:29.923 EOF 00:20:29.923 )") 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
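Each heredoc above expands $subsystem and the target address into one JSON fragment and appends it to the config array; the IFS=, and printf steps just below then join the fragments with commas so they drop straight into a JSON array, and jq validates and pretty-prints the merged document. A condensed stand-in for that generator with the array-join trick intact (the function name and the exact skeleton are simplified assumptions):

    # Emit a bdevperf --json config with one NVMe-oF controller per subsystem.
    gen_json() {
        local subsystem config=()
        for subsystem in "$@"; do
            config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$subsystem" "$subsystem" "$subsystem")")
        done
        local IFS=,
        # "${config[*]}" joins the fragments with the first IFS character, a
        # comma, which is exactly the separator a JSON array body needs.
        jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
    }

bdevperf consumes the result on a file descriptor (--json /dev/fd/63 in the launch line above), so the generated config never lands on disk.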
00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:29.923 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme1", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.923 "hdgst": false, 00:20:29.923 "ddgst": false 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 },{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme2", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.923 "hdgst": false, 00:20:29.923 "ddgst": false 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 },{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme3", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:29.923 "hdgst": false, 00:20:29.923 "ddgst": false 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 },{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme4", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:29.923 "hdgst": false, 00:20:29.923 "ddgst": false 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 },{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme5", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:29.923 "hdgst": false, 00:20:29.923 "ddgst": false 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 },{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme6", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.923 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:29.923 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:29.923 "hdgst": false, 00:20:29.923 "ddgst": false 00:20:29.923 }, 00:20:29.923 "method": "bdev_nvme_attach_controller" 00:20:29.923 },{ 00:20:29.923 "params": { 00:20:29.923 "name": "Nvme7", 00:20:29.923 "trtype": "tcp", 00:20:29.923 "traddr": "10.0.0.2", 00:20:29.923 "adrfam": "ipv4", 00:20:29.923 "trsvcid": "4420", 00:20:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:29.924 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:29.924 "hdgst": false, 00:20:29.924 "ddgst": false 00:20:29.924 }, 00:20:29.924 "method": "bdev_nvme_attach_controller" 00:20:29.924 },{ 00:20:29.924 "params": { 00:20:29.924 "name": "Nvme8", 00:20:29.924 "trtype": "tcp", 00:20:29.924 "traddr": "10.0.0.2", 00:20:29.924 "adrfam": "ipv4", 00:20:29.924 "trsvcid": "4420", 00:20:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:29.924 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:29.924 "hdgst": false, 00:20:29.924 "ddgst": false 00:20:29.924 }, 00:20:29.924 "method": "bdev_nvme_attach_controller" 00:20:29.924 },{ 00:20:29.924 "params": { 00:20:29.924 "name": "Nvme9", 00:20:29.924 "trtype": "tcp", 00:20:29.924 "traddr": "10.0.0.2", 00:20:29.924 "adrfam": "ipv4", 00:20:29.924 "trsvcid": "4420", 00:20:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:29.924 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:29.924 "hdgst": false, 00:20:29.924 "ddgst": false 00:20:29.924 }, 00:20:29.924 "method": "bdev_nvme_attach_controller" 00:20:29.924 },{ 00:20:29.924 "params": { 00:20:29.924 "name": "Nvme10", 00:20:29.924 "trtype": "tcp", 00:20:29.924 "traddr": "10.0.0.2", 00:20:29.924 "adrfam": "ipv4", 00:20:29.924 "trsvcid": "4420", 00:20:29.924 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:29.924 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:29.924 "hdgst": false, 00:20:29.924 "ddgst": false 00:20:29.924 }, 00:20:29.924 "method": "bdev_nvme_attach_controller" 00:20:29.924 }' 00:20:29.924 [2024-12-11 14:57:12.509760] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:29.924 [2024-12-11 14:57:12.509869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid717310 ] 00:20:29.924 [2024-12-11 14:57:12.585369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.924 [2024-12-11 14:57:12.644484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.826 Running I/O for 10 seconds... 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.826 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.085 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.085 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:32.085 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:32.085 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:32.348 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:32.348 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:32.348 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 717310 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 717310 ']' 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 717310 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717310 00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.349 14:57:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717310'
killing process with pid 717310
00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 717310
00:20:32.349 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 717310
00:20:32.349 Received shutdown signal, test time was about 0.905533 seconds
00:20:32.349
00:20:32.349 Latency(us)
[2024-12-11T13:57:15.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:32.349 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme1n1 : 0.88 217.83 13.61 0.00 0.00 290281.75 18641.35 254765.13
00:20:32.349 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme2n1 : 0.89 215.83 13.49 0.00 0.00 286829.73 21554.06 256318.58
00:20:32.349 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme3n1 : 0.90 282.96 17.69 0.00 0.00 213857.28 15340.28 250104.79
00:20:32.349 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme4n1 : 0.90 284.49 17.78 0.00 0.00 208537.03 18932.62 246997.90
00:20:32.349 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme5n1 : 0.87 241.71 15.11 0.00 0.00 234407.33 9077.95 248551.35
00:20:32.349 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme6n1 : 0.86 224.09 14.01 0.00 0.00 251699.96 21359.88 251658.24
00:20:32.349 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme7n1 : 0.88 218.86 13.68 0.00 0.00 252588.18 32622.36 237677.23
00:20:32.349 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme8n1 : 0.87 225.89 14.12 0.00 0.00 237176.73 1711.22 228356.55
00:20:32.349 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme9n1 : 0.89 215.19 13.45 0.00 0.00 245877.82 19612.25 257872.02
00:20:32.349 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:32.349 Verification LBA range: start 0x0 length 0x400
00:20:32.349 Nvme10n1 : 0.90 213.95 13.37 0.00 0.00 241691.81 22136.60 285834.05
[2024-12-11T13:57:15.122Z] ===================================================================================================================
[2024-12-11T13:57:15.122Z] Total : 2340.82 146.30 0.00 0.00 243998.85 1711.22 285834.05
00:20:32.643 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:20:33.594 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
target/shutdown.sh@115 -- # kill -0 717132 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.595 rmmod nvme_tcp 00:20:33.595 rmmod nvme_fabrics 00:20:33.595 rmmod nvme_keyring 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 717132 ']' 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 717132 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 717132 ']' 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 717132 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.595 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 717132 00:20:33.855 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.855 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.855 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 717132' 00:20:33.855 killing process with pid 717132 00:20:33.855 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@973 -- # kill 717132 00:20:33.855 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 717132 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.115 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.655 00:20:36.655 real 0m7.580s 00:20:36.655 user 0m22.994s 00:20:36.655 sys 0m1.455s 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.655 ************************************ 00:20:36.655 END TEST nvmf_shutdown_tc2 00:20:36.655 ************************************ 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:36.655 ************************************ 00:20:36.655 START TEST nvmf_shutdown_tc3 00:20:36.655 ************************************ 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:36.655 14:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:36.655 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:36.655 14:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:36.655 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.655 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:36.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.656 14:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:36.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:36.656 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:36.656 14:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:36.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:20:36.656 00:20:36.656 --- 10.0.0.2 ping statistics --- 00:20:36.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.656 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:20:36.656 00:20:36.656 --- 10.0.0.1 ping statistics --- 00:20:36.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.656 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=718223 00:20:36.656 14:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 718223 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 718223 ']' 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.656 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.656 [2024-12-11 14:57:19.206753] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:36.656 [2024-12-11 14:57:19.206844] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.656 [2024-12-11 14:57:19.278378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.656 [2024-12-11 14:57:19.334394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.656 [2024-12-11 14:57:19.334450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.656 [2024-12-11 14:57:19.334478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.656 [2024-12-11 14:57:19.334488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.656 [2024-12-11 14:57:19.334498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
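
The nvmftestinit sequence traced above moves one E810 port into a private network namespace, addresses both ends, opens TCP port 4420, verifies reachability in both directions, and only then launches nvmf_tgt inside that namespace. A condensed sketch of the same pattern, using the interface and namespace names from this run (the relative nvmf_tgt path stands in for the absolute workspace path in the trace, and the repeated "ip netns exec" prefix is collapsed to a single one):

# Sketch: isolate the target port in its own namespace, then start the target there.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tag the rule with a comment so teardown can drop it later via
# iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of tc2.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                         # root namespace reaches the target address
ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace reaches the initiator address
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!                                 # capture added here for the later kill

The comment tag is the whole point of the SPDK_NVMF marker: restoring a dump filtered through grep -v removes every rule the harness added without disturbing pre-existing firewall state.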
00:20:36.656 [2024-12-11 14:57:19.335954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.656 [2024-12-11 14:57:19.336018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.656 [2024-12-11 14:57:19.336084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:36.656 [2024-12-11 14:57:19.336087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.915 [2024-12-11 14:57:19.475937] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.915 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:36.915 Malloc1 00:20:36.915 [2024-12-11 14:57:19.569257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.915 Malloc2 00:20:36.915 Malloc3 00:20:37.175 Malloc4 00:20:37.175 Malloc5 00:20:37.175 Malloc6 00:20:37.175 Malloc7 00:20:37.175 Malloc8 00:20:37.175 Malloc9 00:20:37.434 Malloc10 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=718399 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 718399 /var/tmp/bdevperf.sock 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 718399 ']' 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.434 14:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.434 { 00:20:37.434 "params": { 00:20:37.434 "name": "Nvme$subsystem", 00:20:37.434 "trtype": "$TEST_TRANSPORT", 00:20:37.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.434 "adrfam": "ipv4", 00:20:37.434 "trsvcid": "$NVMF_PORT", 00:20:37.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.434 "hdgst": ${hdgst:-false}, 00:20:37.434 "ddgst": ${ddgst:-false} 00:20:37.434 }, 00:20:37.434 "method": "bdev_nvme_attach_controller" 00:20:37.434 } 00:20:37.434 EOF 00:20:37.434 )") 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.434 { 00:20:37.434 "params": { 00:20:37.434 "name": "Nvme$subsystem", 00:20:37.434 "trtype": "$TEST_TRANSPORT", 00:20:37.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.434 "adrfam": "ipv4", 00:20:37.434 "trsvcid": "$NVMF_PORT", 00:20:37.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.434 "hdgst": ${hdgst:-false}, 00:20:37.434 "ddgst": ${ddgst:-false} 00:20:37.434 }, 00:20:37.434 "method": "bdev_nvme_attach_controller" 00:20:37.434 } 00:20:37.434 EOF 00:20:37.434 )") 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.434 { 00:20:37.434 "params": { 00:20:37.434 
"name": "Nvme$subsystem", 00:20:37.434 "trtype": "$TEST_TRANSPORT", 00:20:37.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.434 "adrfam": "ipv4", 00:20:37.434 "trsvcid": "$NVMF_PORT", 00:20:37.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.434 "hdgst": ${hdgst:-false}, 00:20:37.434 "ddgst": ${ddgst:-false} 00:20:37.434 }, 00:20:37.434 "method": "bdev_nvme_attach_controller" 00:20:37.434 } 00:20:37.434 EOF 00:20:37.434 )") 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.434 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.434 { 00:20:37.434 "params": { 00:20:37.434 "name": "Nvme$subsystem", 00:20:37.434 "trtype": "$TEST_TRANSPORT", 00:20:37.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.434 "adrfam": "ipv4", 00:20:37.434 "trsvcid": "$NVMF_PORT", 00:20:37.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.434 "hdgst": ${hdgst:-false}, 00:20:37.434 "ddgst": ${ddgst:-false} 00:20:37.434 }, 00:20:37.434 "method": "bdev_nvme_attach_controller" 00:20:37.434 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.435 { 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme$subsystem", 00:20:37.435 "trtype": "$TEST_TRANSPORT", 00:20:37.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "$NVMF_PORT", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.435 "hdgst": ${hdgst:-false}, 00:20:37.435 "ddgst": ${ddgst:-false} 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.435 { 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme$subsystem", 00:20:37.435 "trtype": "$TEST_TRANSPORT", 00:20:37.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "$NVMF_PORT", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.435 "hdgst": ${hdgst:-false}, 00:20:37.435 "ddgst": ${ddgst:-false} 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.435 { 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme$subsystem", 00:20:37.435 "trtype": "$TEST_TRANSPORT", 00:20:37.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "$NVMF_PORT", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.435 "hdgst": ${hdgst:-false}, 00:20:37.435 "ddgst": ${ddgst:-false} 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.435 { 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme$subsystem", 00:20:37.435 "trtype": "$TEST_TRANSPORT", 00:20:37.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "$NVMF_PORT", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.435 "hdgst": ${hdgst:-false}, 00:20:37.435 "ddgst": ${ddgst:-false} 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.435 { 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme$subsystem", 00:20:37.435 "trtype": "$TEST_TRANSPORT", 00:20:37.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "$NVMF_PORT", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.435 "hdgst": ${hdgst:-false}, 00:20:37.435 "ddgst": ${ddgst:-false} 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.435 { 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme$subsystem", 00:20:37.435 "trtype": "$TEST_TRANSPORT", 00:20:37.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "$NVMF_PORT", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.435 "hdgst": ${hdgst:-false}, 00:20:37.435 "ddgst": ${ddgst:-false} 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 } 00:20:37.435 EOF 00:20:37.435 )") 00:20:37.435 14:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:37.435 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme1", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme2", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme3", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme4", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme5", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme6", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme7", 00:20:37.435 "trtype": "tcp", 00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.435 },{ 00:20:37.435 "params": { 00:20:37.435 "name": "Nvme8", 00:20:37.435 "trtype": "tcp", 
00:20:37.435 "traddr": "10.0.0.2", 00:20:37.435 "adrfam": "ipv4", 00:20:37.435 "trsvcid": "4420", 00:20:37.435 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:37.435 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:37.435 "hdgst": false, 00:20:37.435 "ddgst": false 00:20:37.435 }, 00:20:37.435 "method": "bdev_nvme_attach_controller" 00:20:37.436 },{ 00:20:37.436 "params": { 00:20:37.436 "name": "Nvme9", 00:20:37.436 "trtype": "tcp", 00:20:37.436 "traddr": "10.0.0.2", 00:20:37.436 "adrfam": "ipv4", 00:20:37.436 "trsvcid": "4420", 00:20:37.436 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:37.436 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:37.436 "hdgst": false, 00:20:37.436 "ddgst": false 00:20:37.436 }, 00:20:37.436 "method": "bdev_nvme_attach_controller" 00:20:37.436 },{ 00:20:37.436 "params": { 00:20:37.436 "name": "Nvme10", 00:20:37.436 "trtype": "tcp", 00:20:37.436 "traddr": "10.0.0.2", 00:20:37.436 "adrfam": "ipv4", 00:20:37.436 "trsvcid": "4420", 00:20:37.436 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:37.436 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:37.436 "hdgst": false, 00:20:37.436 "ddgst": false 00:20:37.436 }, 00:20:37.436 "method": "bdev_nvme_attach_controller" 00:20:37.436 }' 00:20:37.436 [2024-12-11 14:57:20.079227] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:37.436 [2024-12-11 14:57:20.079307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid718399 ] 00:20:37.436 [2024-12-11 14:57:20.153162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.696 [2024-12-11 14:57:20.213772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.599 Running I/O for 10 seconds... 
00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:39.599 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=136 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 718223 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 718223 ']' 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 718223 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 718223 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 718223' 00:20:39.872 killing process with pid 718223 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 718223 00:20:39.872 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 718223 00:20:39.872 [2024-12-11 14:57:22.555140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88cf0 is same with the state(6) to be set 00:20:39.872 [2024-12-11 14:57:22.555224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88cf0 is same with the state(6) to be set 00:20:39.872 [2024-12-11 14:57:22.555257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88cf0 is same with the state(6) to be set 00:20:39.872 [2024-12-11 14:57:22.555270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88cf0 is same with the state(6) to be set 00:20:39.872 [2024-12-11 14:57:22.555282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88cf0 is same with the state(6) to be set 00:20:39.872 [2024-12-11 14:57:22.555295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xc88cf0 is same with the state(6) to be set 00:20:39.872 [2024-12-11 14:57:22.555 .. 14:57:22.558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc88cf0 is same with the state(6) to be set (the same *ERROR* line repeats continuously for tqpair=0xc88cf0 and then tqpair=0xa1a120; duplicate lines omitted)
recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558113] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558244] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558301] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.558428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a120 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 
00:20:39.873 [2024-12-11 14:57:22.559976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.559999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.873 [2024-12-11 14:57:22.560078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is 
same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.560607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc891c0 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562356] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 
00:20:39.874 [2024-12-11 14:57:22.562639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.874 [2024-12-11 14:57:22.562733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is 
same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.562961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89690 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.564123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9320 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.564331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720c70 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.564506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bd330 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.564737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 14:57:22.564827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.875 [2024-12-11 14:57:22.564840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.875 [2024-12-11 
14:57:22.564853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8470 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the 
state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.875 [2024-12-11 14:57:22.565907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.565919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.565933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.565962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.565978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.565990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566108] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.566378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc89b80 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567460] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 00:20:39.876 [2024-12-11 14:57:22.567731] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set 
00:20:39.876-00:20:39.877 [2024-12-11 14:57:22.567749 - 22.568222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a050 is same with the state(6) to be set [message repeated 41 times]
00:20:39.877 [2024-12-11 14:57:22.569403 - 22.570157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc8a3d0 is same with the state(6) to be set [message repeated 63 times]
00:20:39.878 [2024-12-11 14:57:22.570479 - 22.570707] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [message repeated 4 times]
00:20:39.878 [2024-12-11 14:57:22.571803 - 22.572166] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-10 nsid:1 lba:16384-17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
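The recv-state floods above are the transport's set-state helper refusing a no-op transition: each line records a request to move a qpair into the receive state it already holds, logged at the tcp.c:1790 call site shown in the messages. A minimal standalone sketch of that guard pattern, with hypothetical type names and state numbering (not the actual SPDK source):

    #include <stdio.h>

    /* Hypothetical receive-state enum; "state(6)" in the log is simply the
     * numeric value of whatever state the real transport's enum puts at 6. */
    enum recv_state { RECV_STATE_READY = 0, RECV_STATE_ERROR = 6 };

    struct tqpair {
        const void *addr;
        enum recv_state recv_state;
    };

    /* Guard sketch: log (rather than silently ignore) a request to enter the
     * state the qpair is already in, then return without doing anything. */
    static void set_recv_state(struct tqpair *q, enum recv_state s)
    {
        if (q->recv_state == s) {
            fprintf(stderr, "*ERROR*: The recv state of tqpair=%p is same "
                            "with the state(%d) to be set\n", q->addr, (int)s);
            return;
        }
        q->recv_state = s;
    }

    int main(void)
    {
        struct tqpair q = { (const void *)0xc8a050, RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR); /* reproduces the repeated line */
        return 0;
    }

A tight poll loop that keeps requesting the same transition explains why the identical message appears dozens of times within a few hundred microseconds.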
00:20:39.878-00:20:39.880 [2024-12-11 14:57:22.572181 - 22.573895] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:11-63 nsid:1 lba:17792-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.878 [2024-12-11 14:57:22.572393 - 22.572473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa198d0 is same with the state(6) to be set [message repeated 6 times, originally interleaved mid-line with the notices above]
00:20:39.878-00:20:39.880 [2024-12-11 14:57:22.572806 - 22.573650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa19c50 is same with the state(6) to be set [message repeated ~60 times, originally interleaved mid-line with the notices above]
00:20:39.880 [2024-12-11 14:57:22.573909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bc130 is same with the state(6) to be set
00:20:39.880 [2024-12-11 14:57:22.574254 - 22.574409] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:29-33 nsid:1 lba:20096-20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
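Each NOTICE pair above is one outstanding I/O being completed with an abort status as its submission queue is torn down. The "(00/08)" suffix after ABORTED - SQ DELETION reads as status code type / status code: generic command status (SCT 0x00) and "Command Aborted due to SQ Deletion" (SC 0x08) in the NVMe base specification. A small decoder for just that pair, offered as an illustrative sketch rather than SPDK's actual print helper:

    #include <stdio.h>
    #include <stdint.h>

    /* Decode the "(sct/sc)" pair printed after a completion's status string.
     * SCT 0x0 is the NVMe generic command status set; within it, SC 0x08 is
     * "Command Aborted due to SQ Deletion". */
    static const char *status_string(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x00 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "UNKNOWN";
    }

    int main(void)
    {
        uint8_t sct = 0x00, sc = 0x08;   /* the (00/08) from the log */
        printf("%s (%02x/%02x)\n", status_string(sct, sc), sct, sc);
        return 0;
    }

The cid and lba fields advance in lockstep (lba = 16384 + 128 * cid), which is why the aborted commands can be summarized as a single contiguous range per queue.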
00:20:39.880-00:20:39.881 [2024-12-11 14:57:22.574434 - 22.575371] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:34-63 nsid:1 lba:20736-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.881-00:20:39.882 [2024-12-11 14:57:22.575387 - 22.576258] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-28 nsid:1 lba:16384-19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.882 [2024-12-11 14:57:22.578015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9320 (9): Bad file descriptor
00:20:39.882 [2024-12-11 14:57:22.578058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720c70 (9): Bad file descriptor
00:20:39.882 [2024-12-11 14:57:22.578095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x12bd330 (9): Bad file descriptor 00:20:39.882 [2024-12-11 14:57:22.578148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e4190 is same with the state(6) to be set 00:20:39.882 [2024-12-11 14:57:22.578323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e3990 is same with the state(6) to be set 00:20:39.882 [2024-12-11 14:57:22.578479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.882 [2024-12-11 14:57:22.578498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.882 [2024-12-11 14:57:22.578513] 
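The "(00/08)" printed after ABORTED - SQ DELETION is the NVMe status pair (status code type / status code): SCT 0x0 is the generic command status set, and SC 0x08 in that set is "Command Aborted due to SQ Deletion" — the expected completion for I/O still in flight when its submission queue is torn down. A minimal standalone decode, assuming the NVMe completion DW3 layout (bits 24:17 = SC, bits 27:25 = SCT, bit 31 = DNR) rather than SPDK's own spdk_nvme_cpl definitions:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the status field from NVMe completion dword 3.
     * Field positions follow the NVMe base spec; this is an
     * illustrative sketch, not SPDK source. */
    static void print_status(uint32_t cpl_dw3)
    {
        uint8_t sc  = (cpl_dw3 >> 17) & 0xff; /* status code */
        uint8_t sct = (cpl_dw3 >> 25) & 0x7;  /* status code type (0 = generic) */
        uint8_t dnr = (cpl_dw3 >> 31) & 0x1;  /* do-not-retry bit */
        printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x08: Command Aborted due to SQ Deletion,
         * logged above as "ABORTED - SQ DELETION (00/08)". */
        print_status((0x0u << 25) | (0x08u << 17));
        return 0;
    }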
00:20:39.882 [... matching ASYNC EVENT REQUEST abort blocks (admin cid:0-3, each ABORTED - SQ DELETION (00/08)) follow for tqpair=0x12beef0, 0x171a250, 0x1231110 and 0x16e01f0, each closing with its own "recv state ... is same with the state(6) to be set" error, the last being ...]
00:20:39.882 [2024-12-11 14:57:22.579097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e01f0 is same with the state(6) to be set
00:20:39.882 [2024-12-11 14:57:22.579124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8470 (9): Bad file descriptor
00:20:39.882 [2024-12-11 14:57:22.580376] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:39.882 [2024-12-11 14:57:22.581845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:20:39.882 [2024-12-11 14:57:22.581885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:39.882 [2024-12-11 14:57:22.581910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e3990 (9): Bad file descriptor
00:20:39.882 [2024-12-11 14:57:22.582918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.882 [2024-12-11 14:57:22.582950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8470 with addr=10.0.0.2, port=4420
00:20:39.882 [2024-12-11 14:57:22.582967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8470 is same with the state(6) to be set
00:20:39.882 [2024-12-11 14:57:22.583364] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
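errno 111 is ECONNREFUSED on Linux: the target at 10.0.0.2:4420 (the NVMe/TCP well-known port) is no longer accepting connections while the host keeps trying to reconnect. The failure is reproducible with nothing more than a plain POSIX socket; a minimal sketch, assuming no listener on the local port used here:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { 0 };
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                     /* NVMe/TCP well-known port */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr); /* assumes nothing listens here */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            /* Prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }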
00:20:39.882 [2024-12-11 14:57:22.583438] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:39.882 [2024-12-11 14:57:22.583777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.882 [2024-12-11 14:57:22.583803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.883 [... the same pair repeats for WRITE cid:1-9 (lba 16512-17536) ...]
00:20:39.883 [2024-12-11 14:57:22.584361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.883 [2024-12-11 14:57:22.584389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e3990 with addr=10.0.0.2, port=4420
00:20:39.883 [2024-12-11 14:57:22.584405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e3990 is same with the state(6) to be set
00:20:39.883 [2024-12-11 14:57:22.584424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c8470 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.585438] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:39.883 [2024-12-11 14:57:22.585479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:39.883 [2024-12-11 14:57:22.585511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12beef0 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.585541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e3990 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.585568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:20:39.883 [2024-12-11 14:57:22.585582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:20:39.883 [2024-12-11 14:57:22.585598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:20:39.883 [2024-12-11 14:57:22.585613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:20:39.883 [2024-12-11 14:57:22.585730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:39.883 [2024-12-11 14:57:22.585752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:39.883 [2024-12-11 14:57:22.585767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:39.883 [2024-12-11 14:57:22.585780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
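The "(9): Bad file descriptor" in the flush failures is plain errno 9 (EBADF): by the time the completion poller tries to flush a torn-down qpair, the underlying socket fd has already been closed. The same errno falls out of any write on a closed descriptor; a minimal sketch:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        close(fd);                        /* descriptor is now invalid */
        char byte = 0;
        if (send(fd, &byte, 1, 0) < 0)    /* a "flush" attempt on the dead fd */
            /* Prints: (9): Bad file descriptor */
            printf("(%d): %s\n", errno, strerror(errno));
        return 0;
    }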
00:20:39.883 [2024-12-11 14:57:22.586182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:39.883 [2024-12-11 14:57:22.586211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12beef0 with addr=10.0.0.2, port=4420
00:20:39.883 [2024-12-11 14:57:22.586227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12beef0 is same with the state(6) to be set
00:20:39.883 [2024-12-11 14:57:22.586295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12beef0 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.586366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:20:39.883 [2024-12-11 14:57:22.586385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:20:39.883 [2024-12-11 14:57:22.586399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:20:39.883 [2024-12-11 14:57:22.586413] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:20:39.883 [2024-12-11 14:57:22.588031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e4190 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.588074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171a250 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.588105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1231110 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.588136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e01f0 (9): Bad file descriptor
00:20:39.883 [2024-12-11 14:57:22.588284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.883 [2024-12-11 14:57:22.588309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.884 [... the same pair repeats for READ cid:6-22 (lba 17152-19200), WRITE cid:0-4 (lba 24576-25088) and READ cid:23-63 (lba 19328-24448) ...]
00:20:39.885 [2024-12-11 14:57:22.590302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14cd140 is same with the state(6) to be set
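For triage, the useful signal in dumps like the above is which commands were in flight per queue, not the individual lines, and the NOTICE format is regular enough to parse mechanically. A small illustrative parser (not part of SPDK or of this test) pulling opcode, cid and LBA range out of one line copied from this log:

    #include <stdio.h>

    int main(void)
    {
        const char *line =
            "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
            "WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128";
        char op[8];
        int sqid, cid, nsid, len;
        long lba;
        /* Skip up to the *NOTICE* marker, then pick the fields apart. */
        if (sscanf(line, "%*[^*]*NOTICE*: %7s sqid:%d cid:%d nsid:%d lba:%ld len:%d",
                   op, &sqid, &cid, &nsid, &lba, &len) == 6)
            /* Prints: WRITE sqid:1 cid:54 lba 23296..23423 */
            printf("%s sqid:%d cid:%d lba %ld..%ld\n",
                   op, sqid, cid, lba, lba + len - 1);
        return 0;
    }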
00:20:39.885 [2024-12-11 14:57:22.591578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.885 [2024-12-11 14:57:22.591602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:39.885 [... the same pair repeats for WRITE cid:57-63 (lba 23680-24448), READ cid:5-22 (lba 17024-19200), WRITE cid:0-4 (lba 24576-25088) and READ cid:23-41 (lba 19328-21632) ...]
00:20:39.886 [2024-12-11 14:57:22.593134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:39.886 [2024-12-11 14:57:22.593148] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.593552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.593570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ce2b0 is same with the state(6) to be set 00:20:39.886 [2024-12-11 14:57:22.594845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.594868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.594889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.594904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.594921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.594936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.594951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.594966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.594981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.594995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.595011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.595025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.595041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.595055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.886 [2024-12-11 14:57:22.595070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.886 [2024-12-11 14:57:22.595085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.595955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.595972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:39.887 [2024-12-11 14:57:22.595986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 
14:57:22.596288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.887 [2024-12-11 14:57:22.596319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.887 [2024-12-11 14:57:22.596334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.596808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.596822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16d04d0 is same with the state(6) to be set 00:20:39.888 [2024-12-11 14:57:22.598030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:39.888 [2024-12-11 14:57:22.598066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:39.888 [2024-12-11 14:57:22.598087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:39.888 [2024-12-11 14:57:22.598467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.888 [2024-12-11 14:57:22.598497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c9320 with addr=10.0.0.2, port=4420 00:20:39.888 [2024-12-11 14:57:22.598514] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c9320 is same with the state(6) to be set 00:20:39.888 [2024-12-11 14:57:22.598622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.888 [2024-12-11 14:57:22.598647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bd330 with addr=10.0.0.2, port=4420 00:20:39.888 [2024-12-11 14:57:22.598663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bd330 is same with the state(6) to be set 00:20:39.888 [2024-12-11 14:57:22.598743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:39.888 [2024-12-11 14:57:22.598767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1720c70 with addr=10.0.0.2, port=4420 00:20:39.888 [2024-12-11 14:57:22.598783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1720c70 is same with the state(6) to be set 00:20:39.888 [2024-12-11 14:57:22.599674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:39.888 [2024-12-11 14:57:22.599703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:39.888 [2024-12-11 14:57:22.599723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:39.888 [2024-12-11 14:57:22.599785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c9320 (9): Bad file descriptor 00:20:39.888 [2024-12-11 14:57:22.599810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bd330 (9): Bad file descriptor 00:20:39.888 [2024-12-11 14:57:22.599834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1720c70 (9): Bad file descriptor 00:20:39.888 [2024-12-11 14:57:22.599918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.599941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.599963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.599979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.599995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 
14:57:22.600070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.888 [2024-12-11 14:57:22.600408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.888 [2024-12-11 14:57:22.600423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600694] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.600982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.600996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.889 [2024-12-11 14:57:22.601293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.889 [2024-12-11 14:57:22.601308] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command/spdk_nvme_print_completion pairs repeat for READ sqid:1 cid:49-63 (lba:22656-24448) and WRITE sqid:1 cid:0-3 (lba:24576-24960), len:128 each, every command completed ABORTED - SQ DELETION (00/08) ...]
00:20:39.890 [2024-12-11 14:57:22.601918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16c9220 is same with the state(6) to be set
00:20:39.890 [2024-12-11 14:57:22.603157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical pairs repeat for READ sqid:1 cid:0-63 (lba:16384-24448), len:128 each, every command completed ABORTED - SQ DELETION (00/08) ...]
00:20:39.891 [2024-12-11 14:57:22.605144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ca4f0 is same with the state(6) to be set
00:20:39.891 [2024-12-11 14:57:22.606375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical pairs repeat for READ sqid:1 cid:0-63 (lba:8192-16256), len:128 each, every command completed ABORTED - SQ DELETION (00/08) ...]
00:20:39.893 [2024-12-11 14:57:22.608359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cb820 is same with the state(6) to be set
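Every completion above carries the same status pair, "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion" - the expected outcome for I/O still outstanding on a queue pair being torn down. Below is a minimal, self-contained sketch (not SPDK source; the example status word is an assumption constructed to match the log) of how those bits unpack from the 16-bit status halfword of a completion queue entry:

/*
 * Sketch only: decode the "(SCT/SC)" pair that the log's
 * spdk_nvme_print_completion lines show, e.g. "ABORTED - SQ DELETION (00/08)".
 * Layout follows the NVMe CQE dword 3, bits 31:16: P (phase tag), SC,
 * SCT, CRD, M (more), DNR (do not retry).
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical status halfword: P=0, SC=0x08, SCT=0x0, M=0, DNR=0. */
    uint16_t status = 0x0010;

    unsigned p   = status & 0x1;          /* bit 0: phase tag             */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1: status code        */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type  */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14: more                 */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15: do not retry         */

    /* SCT 0x0 is the generic command status set; SC 0x08 in that set is
     * "Command Aborted due to SQ Deletion" - exactly the (00/08) printed
     * for every outstanding I/O on the deleted submission queue, along
     * with the p:0 m:0 dnr:0 suffix decoded here. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}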
00:20:39.893 [2024-12-11 14:57:22.609611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... identical pairs repeat for READ sqid:1 cid:0-62 (lba:8192-16128), len:128 each, every command completed ABORTED - SQ DELETION (00/08) ...]
00:20:39.895 [2024-12-11
14:57:22.611551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.895 [2024-12-11 14:57:22.611570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:39.895 [2024-12-11 14:57:22.611584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.895 [2024-12-11 14:57:22.611598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cf150 is same with the state(6) to be set 00:20:39.895 [2024-12-11 14:57:22.613318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:39.895 [2024-12-11 14:57:22.613357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:39.895 [2024-12-11 14:57:22.613380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:40.154 task offset: 16384 on job bdev=Nvme3n1 fails 00:20:40.155 00:20:40.155 Latency(us) 00:20:40.155 [2024-12-11T13:57:22.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.155 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme1n1 ended in about 0.71 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme1n1 : 0.71 187.65 11.73 90.30 0.00 226996.30 24175.50 248551.35 00:20:40.155 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme2n1 ended in about 0.71 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme2n1 : 0.71 186.80 11.68 89.89 0.00 222127.24 15534.46 256318.58 00:20:40.155 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme3n1 ended in about 0.70 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme3n1 : 0.70 183.48 11.47 91.74 0.00 217048.56 11990.66 254765.13 00:20:40.155 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme4n1 ended in about 0.72 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme4n1 : 0.72 183.25 11.45 88.85 0.00 214038.20 17961.72 234570.33 00:20:40.155 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme5n1 ended in about 0.72 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme5n1 : 0.72 176.91 11.06 88.45 0.00 213487.31 20097.71 262532.36 00:20:40.155 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme6n1 ended in about 0.73 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme6n1 : 0.73 88.06 5.50 88.06 0.00 312923.78 35923.44 292047.83 00:20:40.155 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme7n1 ended in about 0.70 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme7n1 : 0.70 183.12 11.45 91.56 0.00 193321.59 9272.13 251658.24 00:20:40.155 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme8n1 ended in about 0.70 seconds with error 00:20:40.155 Verification LBA range: 
start 0x0 length 0x400 00:20:40.155 Nvme8n1 : 0.70 182.15 11.38 14.23 0.00 262016.48 6699.24 242337.56 00:20:40.155 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme9n1 ended in about 0.73 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme9n1 : 0.73 87.67 5.48 87.67 0.00 288346.26 22622.06 264085.81 00:20:40.155 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:40.155 Job: Nvme10n1 ended in about 0.72 seconds with error 00:20:40.155 Verification LBA range: start 0x0 length 0x400 00:20:40.155 Nvme10n1 : 0.72 89.48 5.59 89.48 0.00 272354.42 25243.50 287387.50 00:20:40.155 [2024-12-11T13:57:22.928Z] =================================================================================================================== 00:20:40.155 [2024-12-11T13:57:22.928Z] Total : 1548.58 96.79 820.24 0.00 235895.53 6699.24 292047.83 00:20:40.155 [2024-12-11 14:57:22.640346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:40.155 [2024-12-11 14:57:22.640433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:40.155 [2024-12-11 14:57:22.640769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.155 [2024-12-11 14:57:22.640807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8470 with addr=10.0.0.2, port=4420 00:20:40.155 [2024-12-11 14:57:22.640829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c8470 is same with the state(6) to be set 00:20:40.155 [2024-12-11 14:57:22.640923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.155 [2024-12-11 14:57:22.640950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e3990 with addr=10.0.0.2, port=4420 00:20:40.155 [2024-12-11 14:57:22.640966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e3990 is same with the state(6) to be set 00:20:40.155 [2024-12-11 14:57:22.641047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:40.155 [2024-12-11 14:57:22.641072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12beef0 with addr=10.0.0.2, port=4420 00:20:40.155 [2024-12-11 14:57:22.641089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12beef0 is same with the state(6) to be set 00:20:40.155 [2024-12-11 14:57:22.641106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:40.155 [2024-12-11 14:57:22.641119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:40.155 [2024-12-11 14:57:22.641137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:40.155 [2024-12-11 14:57:22.641167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
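The Total row above is a straight aggregation of the per-job rows: summing the IOPS column (187.65 + 186.80 + ... + 89.48) gives the reported 1548.58, and the MiB/s column likewise sums to 96.79, while the min/max columns take the extremes across jobs. A minimal sketch of checking that from a shell, assuming the table was captured to a hypothetical perf.log file:

    # Sum the IOPS and MiB/s columns of the per-job rows; on each row,
    # fields 5 and 6 follow the timestamp, job name, colon and runtime.
    awk '$2 ~ /^Nvme[0-9]+n1$/ && $3 == ":" { iops += $5; mibs += $6 }
         END { printf "Total: %.2f IOPS, %.2f MiB/s\n", iops, mibs }' perf.log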
00:20:40.155 [2024-12-11 14:57:22.641186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:20:40.155 [2024-12-11 14:57:22.641199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:20:40.155 [2024-12-11 14:57:22.641213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:20:40.155 [2024-12-11 14:57:22.641225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
... (the same four-line error-state / reinitialization-failed / failed-state / reset-failed sequence follows for cnode10; Failed to flush ... (9): Bad file descriptor is then reported for tqpairs 0x12beef0, 0x16e3990 and 0x12c8470, and connect() failed, errno = 111 / sock connection error / recv state triples follow for tqpairs 0x16e01f0, 0x16e4190, 0x1231110 and 0x171a250, all with addr=10.0.0.2, port=4420) ...
00:20:40.155 [2024-12-11 14:57:22.642201] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:20:40.155 [2024-12-11 14:57:22.642224] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:20:40.155 [2024-12-11 14:57:22.642242] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:20:40.155 [2024-12-11 14:57:22.642261] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:20:40.155 [2024-12-11 14:57:22.642281] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:20:40.155 [2024-12-11 14:57:22.642305] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:20:40.155 [2024-12-11 14:57:22.643425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:20:40.155 [2024-12-11 14:57:22.643453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:40.155 [2024-12-11 14:57:22.643470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
... (every retry fails the same way: Bad file descriptor flushes for the four tqpairs above, further connect() failed, errno = 111 / sock connection error / recv state triples for tqpairs 0x1720c70, 0x12bd330 and 0x12c9320, and the same four-line failed-reset sequence in turn for cnode3, cnode7, cnode8, cnode4, cnode5, cnode6, cnode9, cnode10 and cnode2) ...
00:20:40.156 [2024-12-11 14:57:22.645022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:20:40.156 [2024-12-11 14:57:22.645040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:20:40.156 [2024-12-11 14:57:22.645054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:20:40.156 [2024-12-11 14:57:22.645065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
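This cascade is the expected outcome of shutdown_tc3: the target is killed while bdevperf still has I/O in flight, so every reconnect hits connection refused (errno 111) and bdev_nvme eventually gives up on all ten controllers. A hedged sketch of watching that from the initiator side with SPDK's rpc.py (bdev_nvme_get_controllers is a standard SPDK RPC; the one-second poll interval and default socket path are illustrative):

    # Poll the bdev_nvme controller list while the target is torn down;
    # the loop ends once the initiator app stops answering on its RPC socket.
    while ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers | jq -r '.[].name'; do
        sleep 1
    done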
00:20:40.415 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:20:41.355 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 718399
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 718399
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 718399
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:41.356 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 718223 ']'
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 718223
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 718223 ']'
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 718223
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (718223) - No such process
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 718223 is not found'
Process with pid 718223 is not found
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:20:43.524
00:20:43.524 real 0m7.223s
00:20:43.524 user 0m17.384s
00:20:43.524 sys 0m1.324s
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
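The set +e / for i in {1..20} / modprobe -v -r sequence above is nvmfcleanup's unload-retry loop: nvme-tcp cannot be removed while another module or an open controller still holds a reference, so the unload is retried instead of failing the run outright. A minimal sketch of the same idea (the sleep interval is illustrative; this is not the literal nvmf/common.sh source):

    # Retry the module unload until its reference count drops to zero.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 0.5
    done
    set -e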
00:20:43.524 ************************************
00:20:43.524 END TEST nvmf_shutdown_tc3
00:20:43.524 ************************************
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:43.524 ************************************
00:20:43.524 START TEST nvmf_shutdown_tc4
00:20:43.524 ************************************
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:43.524 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
... (array bookkeeping traces: nvmf/common.sh@317-@344 initialize pci_drivers, net_devs, e810, x722 and mlx, appending the supported Intel E810 (0x1592, 0x159b), X722 (0x37d2) and Mellanox (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) device IDs) ...
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
... (driver and device-id checks for 0000:0a:00.0 (nvmf/common.sh@368-@378: ice is neither unknown nor unbound, 0x159b is not 0x1017/0x1019, transport is not rdma), then the same for-pci pass repeats for 0000:0a:00.1) ...
Found 0000:0a:00.1 (0x8086 - 0x159b)
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
... (the same sysfs walk repeats for 0000:0a:00.1) ...
Found net devices under 0000:0a:00.1: cvl_0_1
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
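Device discovery here never shells out to lspci; it walks sysfs, matching vendor and device IDs and then globbing /sys/bus/pci/devices/$pci/net/* to find the bound netdev, which is how 0000:0a:00.0 and 0000:0a:00.1 resolve to cvl_0_0 and cvl_0_1 above. A hedged sketch of that walk for the 0x159b E810 variant matched in this run:

    # Find net devices bound to Intel E810 (device id 0x159b) functions
    # by walking sysfs; a sketch of the discovery traced above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done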
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:43.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:43.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms
00:20:43.783
00:20:43.783 --- 10.0.0.2 ping statistics ---
00:20:43.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:43.783 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:43.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:43.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:20:43.783
00:20:43.783 --- 10.0.0.1 ping statistics ---
00:20:43.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:43.783 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=719303
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 719303
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 719303 ']'
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
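At this point the topology is fixed: cvl_0_0 with 10.0.0.2/24 lives inside the cvl_0_0_ns_spdk namespace for the target, cvl_0_1 with 10.0.0.1/24 stays in the root namespace for the initiator, and the two pings above prove reachability in both directions before any NVMe/TCP traffic flows. A condensed sketch of those nvmftestinit steps, using the interface and address values from this run:

    # Move the target NIC into its own namespace and verify reachability;
    # a condensed sketch of the nvmftestinit commands traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns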
00:20:43.784 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:43.784 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:43.784 [2024-12-11 14:57:26.512383] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:20:43.784 [2024-12-11 14:57:26.512456] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:44.042 [2024-12-11 14:57:26.584682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:44.042 [2024-12-11 14:57:26.641987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:44.042 [2024-12-11 14:57:26.642040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:44.042 [2024-12-11 14:57:26.642068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:44.042 [2024-12-11 14:57:26.642079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:44.042 [2024-12-11 14:57:26.642088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:44.042 [2024-12-11 14:57:26.643523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:20:44.042 [2024-12-11 14:57:26.643583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:20:44.042 [2024-12-11 14:57:26.643650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:20:44.042 [2024-12-11 14:57:26.643654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:20:44.042 [2024-12-11 14:57:26.797958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
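nvmfappstart launches nvmf_tgt inside the namespace pinned to cores 1-4 (-m 0x1E), and waitforlisten blocks until the RPC socket answers before the transport is created over it. A hedged sketch of that wait (spdk_get_version is a standard SPDK RPC; the retry count mirrors the max_retries=100 traced above):

    # Block until the freshly started target answers on its RPC socket,
    # then create the TCP transport exactly as the trace above does.
    for _ in {1..100}; do
        ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192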
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.042 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.302 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.302 Malloc1 
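With the target listening, shutdown.sh creates the TCP transport over RPC (the nvmf_create_transport -t tcp -o -u 8192 call traced above) and then writes one batch of RPCs per subsystem into rpcs.txt before replaying the whole file (the rpc_cmd at shutdown.sh@36); the Malloc1 line above and the Malloc2..Malloc10 lines below are the bdevs that batch creates. A hedged sketch of the same flow (the bdev size, block size, and serial numbers here are illustrative placeholders; the authoritative arguments live in target/shutdown.sh and nvmf/common.sh):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
  rm -f "$rpcs"
  for i in {1..10}; do
      {   # one Malloc bdev, one subsystem, one namespace, one TCP listener per index
          echo "bdev_malloc_create 64 512 -b Malloc$i"
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> "$rpcs"
  done
  # Replay the whole batch; word-splitting of $cmd is intentional here.
  while read -r cmd; do "$rpc" -s /var/tmp/spdk.sock $cmd; done < "$rpcs"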
00:20:44.302 [2024-12-11 14:57:26.909860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.302 Malloc2 00:20:44.302 Malloc3 00:20:44.302 Malloc4 00:20:44.562 Malloc5 00:20:44.562 Malloc6 00:20:44.562 Malloc7 00:20:44.562 Malloc8 00:20:44.562 Malloc9 00:20:44.822 Malloc10 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=719365 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:44.822 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:44.822 [2024-12-11 14:57:27.447121] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 719303 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 719303 ']' 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 719303 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 719303 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 719303' 00:20:50.099 killing process with pid 719303 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 719303 00:20:50.099 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 719303 00:20:50.099 [2024-12-11 14:57:32.440964] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64010 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64010 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.441637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa64500 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.442863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63b40 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.442897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63b40 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.442913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63b40 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.442950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63b40 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.442963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63b40 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.442974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63b40 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, 
sc=8) 00:20:50.099 [2024-12-11 14:57:32.444507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63160 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.444540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63160 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 [2024-12-11 14:57:32.444569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63160 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.444583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63160 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.444606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63160 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.444910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.444950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.444967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 starting I/O failed: -6 00:20:50.099 [2024-12-11 14:57:32.444981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.444993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 [2024-12-11 14:57:32.445006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 [2024-12-11 14:57:32.445018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099
[2024-12-11 14:57:32.445030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa63650 is same with the state(6) to be set 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 [2024-12-11 14:57:32.445181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.099 Write completed with error (sct=0, sc=8) 00:20:50.099 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, 
sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 [2024-12-11 14:57:32.446329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write 
completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 [2024-12-11 14:57:32.447497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting 
I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.100 Write completed with error (sct=0, sc=8) 00:20:50.100 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.449299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.101 NVMe io qpair process completion error 00:20:50.101 [2024-12-11 14:57:32.455037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4070 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.455091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x9c4070 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.455117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4070 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.455132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4070 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.455145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4070 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.455157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4070 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.455786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.455820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.455837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.455861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.455888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.455901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.455914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.455927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c4540 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed
with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.456151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3ba0 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.456185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3ba0 is same with the state(6) to be set 00:20:50.101 [2024-12-11 14:57:32.456201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3ba0 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.456332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.456877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67a50 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.456903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67a50 is same with the state(6) to be set 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.456916]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67a50 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.456929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67a50 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.456956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67a50 is same with the state(6) to be set 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 [2024-12-11 14:57:32.456969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67a50 is same with the state(6) to be set 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 [2024-12-11 14:57:32.457425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.101 starting I/O failed: -6 00:20:50.101 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 Write 
completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.457823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3820 is same with the state(6) to be set 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.457861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3820 is same with the state(6) to be set 00:20:50.102 starting I/O failed: -6 00:20:50.102 [2024-12-11 14:57:32.457877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3820 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 [2024-12-11 14:57:32.457891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3820 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.457903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3820 is same with the state(6) to be set 00:20:50.102 [2024-12-11 14:57:32.457917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c3820 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.458287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.458314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 [2024-12-11 14:57:32.458328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 [2024-12-11 14:57:32.458340] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 [2024-12-11 14:57:32.458352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 [2024-12-11 14:57:32.458365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.458377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 starting I/O failed: -6 00:20:50.102 [2024-12-11 14:57:32.458389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 [2024-12-11 14:57:32.458401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 [2024-12-11 14:57:32.458413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa67580 is same with the state(6) to be set 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 [2024-12-11 14:57:32.458585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0,
sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 
00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.102 Write completed with error (sct=0, sc=8) 00:20:50.102 starting I/O failed: -6 00:20:50.103 [2024-12-11 14:57:32.460237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.103 NVMe io qpair process completion error 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 [2024-12-11 14:57:32.461275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 [2024-12-11 14:57:32.461287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 [2024-12-11 14:57:32.461299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the
state(6) to be set 00:20:50.103 [2024-12-11 14:57:32.461336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c6220 is same with the state(6) to be set 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.461573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8)
00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 [2024-12-11 14:57:32.462654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.103 starting I/O failed: -6 00:20:50.103 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 
00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 [2024-12-11 14:57:32.463924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 
00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 00:20:50.104 Write completed with error (sct=0, sc=8) 00:20:50.104 starting I/O failed: -6 
00:20:50.104 [2024-12-11 14:57:32.466397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:50.104 NVMe io qpair process completion error
[... repeated write-failure messages elided ...]
00:20:50.105 [2024-12-11 14:57:32.467700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure messages elided ...]
00:20:50.105 [2024-12-11 14:57:32.468810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure messages elided ...]
00:20:50.105 [2024-12-11 14:57:32.469984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-failure messages elided ...]
00:20:50.106 [2024-12-11 14:57:32.471683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.106 NVMe io qpair process completion error
[... repeated write-failure messages elided ...]
00:20:50.106 [2024-12-11 14:57:32.472938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure messages elided ...]
00:20:50.106 [2024-12-11 14:57:32.474013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure messages elided ...]
00:20:50.107 [2024-12-11 14:57:32.475230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure messages elided ...]
00:20:50.107 [2024-12-11 14:57:32.478116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:50.107 NVMe io qpair process completion error
[... repeated write-failure messages elided ...]
00:20:50.107 [2024-12-11 14:57:32.479346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-failure messages elided ...]
00:20:50.108 [2024-12-11 14:57:32.480481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-failure messages elided ...]
00:20:50.108 [2024-12-11 14:57:32.482065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-failure messages elided ...]
00:20:50.109 [2024-12-11 14:57:32.484446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:50.109 NVMe io qpair process completion error
[... repeated write-failure messages elided ...]
00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 
Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 [2024-12-11 14:57:32.491167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 starting I/O failed: -6 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.110 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 
starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 [2024-12-11 14:57:32.492366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting 
I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 [2024-12-11 14:57:32.493506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O 
failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.111 Write completed with error (sct=0, sc=8) 00:20:50.111 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 starting I/O failed: -6 00:20:50.112 [2024-12-11 14:57:32.495942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:50.112 NVMe io qpair process completion error 00:20:50.112 Write completed with error (sct=0, sc=8) 00:20:50.112 Write completed with error (sct=0, 
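Every disconnected qpair produces the same two-line pattern, thousands of times over, so only the distinct error events are kept above. Exact counts can be recovered from a saved copy of the console output; a minimal sketch, assuming the output was saved to a file named build.log (hypothetical name):

    # total failed write completions in the saved console log
    grep -c 'Write completed with error (sct=0, sc=8)' build.log

    # CQ transport errors grouped by subsystem and qpair id
    grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] CQ transport error -6[^)]*) on qpair id [0-9]*' build.log | sort | uniq -c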
00:20:50.112 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeats condensed ...]
00:20:50.112 [2024-12-11 14:57:32.497305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:50.112 [... repeats condensed ...]
00:20:50.112 [2024-12-11 14:57:32.498359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.112 [... repeats condensed ...]
00:20:50.113 [2024-12-11 14:57:32.499501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:50.113 [... repeats condensed ...]
00:20:50.113 [2024-12-11 14:57:32.501279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:50.113 NVMe io qpair process completion error
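For reference when reading these failures: sct is the NVMe Status Code Type and sc the Status Code, while -6 is the POSIX errno ENXIO ("No such device or address") that SPDK reports once the TCP connection to the target is gone. A minimal decoding helper, following the NVMe base specification's type names (the mapping is from the spec, not from this log):

    # decode the (sct, sc) pair printed by the failed completions above.
    # sct=0 selects the Generic Command Status set, in which sc=0x08 is
    # "Command Aborted due to SQ Deletion" - expected while the target
    # tears down its submission queues during shutdown.
    decode_nvme_status() {
        local sct=$1 sc=$2 type
        case "$sct" in
            0) type='Generic Command Status' ;;
            1) type='Command Specific Status' ;;
            2) type='Media and Data Integrity Errors' ;;
            3) type='Path Related Status' ;;
            *) type='Reserved/Vendor Specific' ;;
        esac
        printf '%s, sc=0x%02x\n' "$type" "$sc"
    }

    decode_nvme_status 0 8   # -> Generic Command Status, sc=0x08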
00:20:50.113 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeats condensed ...]
00:20:50.114 [2024-12-11 14:57:32.504439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:50.114 [... repeats condensed ...]
00:20:50.115 [2024-12-11 14:57:32.507706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:50.115 NVMe io qpair process completion error
00:20:50.115 Initializing NVMe Controllers
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:50.115 Controller IO queue size 128, less than required.
00:20:50.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:50.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:50.115 Initialization complete. Launching workers.
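The queue-size advisories above mean the perf tool asked for more outstanding I/O per queue than the 128 entries each target subsystem granted, so the surplus requests wait inside the NVMe driver rather than on the wire; harmless for this shutdown test, but it can skew latency figures. A minimal sketch of a standalone invocation that stays within the granted queue size; the -q/-o/-t values are illustrative, not the harness's actual parameters:

    # drive one of the subsystems above with queue depth <= 128 so no
    # requests are held back in the driver
    ./build/bin/spdk_nvme_perf \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3' \
        -q 64 -o 4096 -w write -t 10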
00:20:50.115 ========================================================
00:20:50.115 Latency(us)
00:20:50.115 Device Information : IOPS MiB/s Average min max
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1844.36 79.25 69421.52 1128.46 121222.02
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1868.27 80.28 68558.35 1073.33 120871.80
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1862.35 80.02 68820.57 1038.82 119274.99
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1858.96 79.88 68968.94 845.73 126065.61
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1863.40 80.07 68845.49 850.38 117446.04
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1843.72 79.22 68853.34 1032.63 117543.62
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1848.17 79.41 69395.54 894.19 128814.75
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1864.25 80.10 68816.73 1046.27 116649.26
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1851.55 79.56 68565.84 1156.50 115192.57
00:20:50.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1849.44 79.47 68671.67 1085.95 114440.07
00:20:50.115 ========================================================
00:20:50.115 Total : 18554.47 797.26 68891.15 845.73 128814.75
00:20:50.115
00:20:50.115 [2024-12-11 14:57:32.513916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a16b0 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a19e0 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3ae0 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3900 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a22c0 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a3720 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a2920 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a25f0 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1d10 is same with the state(6) to be set
00:20:50.115 [2024-12-11 14:57:32.514488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a2c50 is same with the state(6) to be set
00:20:50.115 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:20:50.374 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:20:51.314 14:57:33
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 719365 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 719365 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 719365 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.314 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.314 rmmod nvme_tcp 00:20:51.314 rmmod nvme_fabrics 00:20:51.314 rmmod nvme_keyring 00:20:51.314 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.314 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:51.314 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:20:51.314 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 719303 ']' 00:20:51.314 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 719303 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 719303 ']' 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 719303 00:20:51.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (719303) - No such process 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 719303 is not found' 00:20:51.315 Process with pid 719303 is not found 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.315 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:53.851 00:20:53.851 real 0m9.831s 00:20:53.851 user 0m23.899s 00:20:53.851 sys 0m5.719s 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:53.851 ************************************ 00:20:53.851 END TEST nvmf_shutdown_tc4 00:20:53.851 ************************************ 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:53.851 00:20:53.851 real 0m36.911s 00:20:53.851 user 1m38.734s 00:20:53.851 sys 0m11.992s 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- 
# set +x 00:20:53.851 ************************************ 00:20:53.851 END TEST nvmf_shutdown 00:20:53.851 ************************************ 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:53.851 ************************************ 00:20:53.851 START TEST nvmf_nsid 00:20:53.851 ************************************ 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:53.851 * Looking for test storage... 00:20:53.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:53.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.851 --rc genhtml_branch_coverage=1 00:20:53.851 --rc genhtml_function_coverage=1 00:20:53.851 --rc genhtml_legend=1 00:20:53.851 --rc geninfo_all_blocks=1 00:20:53.851 --rc geninfo_unexecuted_blocks=1 00:20:53.851 00:20:53.851 ' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:53.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.851 --rc genhtml_branch_coverage=1 00:20:53.851 --rc genhtml_function_coverage=1 00:20:53.851 --rc genhtml_legend=1 00:20:53.851 --rc geninfo_all_blocks=1 00:20:53.851 --rc geninfo_unexecuted_blocks=1 00:20:53.851 00:20:53.851 ' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:53.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.851 --rc genhtml_branch_coverage=1 00:20:53.851 --rc genhtml_function_coverage=1 00:20:53.851 --rc genhtml_legend=1 00:20:53.851 --rc geninfo_all_blocks=1 00:20:53.851 --rc geninfo_unexecuted_blocks=1 00:20:53.851 00:20:53.851 ' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:53.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.851 --rc genhtml_branch_coverage=1 00:20:53.851 --rc genhtml_function_coverage=1 00:20:53.851 --rc genhtml_legend=1 00:20:53.851 --rc geninfo_all_blocks=1 00:20:53.851 --rc geninfo_unexecuted_blocks=1 00:20:53.851 00:20:53.851 ' 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.851 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:53.852 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:55.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:55.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
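The device scan above matches this rig's two Intel 0x159b (E810) functions against the harness's known-NIC ID tables and then resolves each matched PCI function to its kernel interface through sysfs, as the trace that follows shows. A standalone sketch of that resolution step (the PCI address is the one just matched; the rest is generic shell):

  # Map a NIC's PCI function to its netdev name(s) via the same
  # /sys/bus/pci/devices/<addr>/net/ lookup the harness performs.
  pci=0000:0a:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  # on this machine: Found net devices under 0000:0a:00.0: cvl_0_0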
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:55.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:55.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.758 14:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:55.758 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:56.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:56.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms
00:20:56.017
00:20:56.017 --- 10.0.0.2 ping statistics ---
00:20:56.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:56.017 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:56.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:56.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:20:56.017
00:20:56.017 --- 10.0.0.1 ping statistics ---
00:20:56.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:56.017 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=722108
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 722108
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 722108 ']'
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:56.017 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:20:56.017 [2024-12-11 14:57:38.689540] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
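Condensing the nvmf_tcp_init sequence traced above: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2 as the target side, the other port (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, and the ACCEPT rule for port 4420 is tagged with an SPDK_NVMF comment so the later cleanup sweep ("iptables-save | grep -v SPDK_NVMF | iptables-restore") removes exactly what the test added. A minimal re-run of the same steps, with this run's interface names:

  set -e
  ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"              # target port into the netns
  ip addr add 10.0.0.1/24 dev "$ini_if"          # initiator side
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  # tag the firewall rule so cleanup can grep it back out later
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1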
00:20:56.017 [2024-12-11 14:57:38.689648] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.017 [2024-12-11 14:57:38.761030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.276 [2024-12-11 14:57:38.817237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.276 [2024-12-11 14:57:38.817291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.276 [2024-12-11 14:57:38.817319] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.276 [2024-12-11 14:57:38.817330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.276 [2024-12-11 14:57:38.817339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.276 [2024-12-11 14:57:38.817975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=722193 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.276 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
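The get_main_ns_ip fragment at the end of the trace above resolves which environment variable names the "main" address for the active transport and prints its value. A paraphrase of the traced logic (the TEST_TRANSPORT variable name and the failure paths are assumptions; the trace only shows the values after expansion):

  # Hedged re-creation of nvmf/common.sh's get_main_ns_ip, as traced.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP
          [tcp]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion of that name
      echo "${!ip}"                          # prints 10.0.0.1 in this run
  }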
00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9d9e518e-21c6-48b0-9c6b-6a3b076e2f7a 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5019b57c-2615-4595-80ff-d1c58acba35c 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=bea57935-8ffb-4fd7-bee3-9bafa0ed3736 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.277 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:56.277 null0 00:20:56.277 null1 00:20:56.277 null2 00:20:56.277 [2024-12-11 14:57:39.003133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.277 [2024-12-11 14:57:39.016440] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:20:56.277 [2024-12-11 14:57:39.016508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid722193 ] 00:20:56.277 [2024-12-11 14:57:39.027322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 722193 /var/tmp/tgt2.sock 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 722193 ']' 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:56.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
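The three uuidgen values above reappear at nsid.sh@96, @98, and @100 as NGUID checks: a UUID and an NVMe NGUID are the same 128 bits, so uuid2nguid only strips the dashes (the "tr -d -" in the trace) and the comparison is done in uppercase hex. An equivalent helper, using the first UUID from this run (the case normalization is inferred from the traced comparison, which matches 9D9E... rather than 9d9e...):

  uuid2nguid() {
      local u=${1//-/}   # same effect as:  tr -d -
      echo "${u^^}"      # uppercase, matching the compared form
  }
  uuid2nguid 9d9e518e-21c6-48b0-9c6b-6a3b076e2f7a
  # -> 9D9E518E21C648B09C6B6A3B076E2F7A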
00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.536 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:56.536 [2024-12-11 14:57:39.083775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.536 [2024-12-11 14:57:39.141345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.795 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.795 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:56.795 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:57.363 [2024-12-11 14:57:39.832633] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.363 [2024-12-11 14:57:39.848855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:57.363 nvme0n1 nvme0n2 00:20:57.363 nvme1n1 00:20:57.363 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:57.363 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:57.363 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:57.933 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:58.870 14:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9d9e518e-21c6-48b0-9c6b-6a3b076e2f7a 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9d9e518e21c648b09c6b6a3b076e2f7a 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9D9E518E21C648B09C6B6A3B076E2F7A 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9D9E518E21C648B09C6B6A3B076E2F7A == \9\D\9\E\5\1\8\E\2\1\C\6\4\8\B\0\9\C\6\B\6\A\3\B\0\7\6\E\2\F\7\A ]] 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5019b57c-2615-4595-80ff-d1c58acba35c 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:58.870 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5019b57c2615459580ffd1c58acba35c 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5019B57C2615459580FFD1C58ACBA35C 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5019B57C2615459580FFD1C58ACBA35C == \5\0\1\9\B\5\7\C\2\6\1\5\4\5\9\5\8\0\F\F\D\1\C\5\8\A\C\B\A\3\5\C ]] 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:58.871 14:57:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid bea57935-8ffb-4fd7-bee3-9bafa0ed3736 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:58.871 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bea579358ffb4fd7bee39bafa0ed3736 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BEA579358FFB4FD7BEE39BAFA0ED3736 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ BEA579358FFB4FD7BEE39BAFA0ED3736 == \B\E\A\5\7\9\3\5\8\F\F\B\4\F\D\7\B\E\E\3\9\B\A\F\A\0\E\D\3\7\3\6 ]] 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 722193 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 722193 ']' 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 722193 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722193 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722193' 00:20:59.129 killing process with pid 722193 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 722193 00:20:59.129 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 722193 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set 
+e 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.697 rmmod nvme_tcp 00:20:59.697 rmmod nvme_fabrics 00:20:59.697 rmmod nvme_keyring 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 722108 ']' 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 722108 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 722108 ']' 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 722108 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 722108 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 722108' 00:20:59.697 killing process with pid 722108 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 722108 00:20:59.697 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 722108 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.959 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.870 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:01.870 00:21:01.870 real 0m8.428s 00:21:01.870 user 0m8.247s 00:21:01.870 
sys 0m2.719s 00:21:01.870 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.870 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:01.870 ************************************ 00:21:01.870 END TEST nvmf_nsid 00:21:01.870 ************************************ 00:21:01.870 14:57:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:01.870 00:21:01.870 real 11m40.790s 00:21:01.870 user 27m32.223s 00:21:01.870 sys 2m47.931s 00:21:01.870 14:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:01.870 14:57:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:01.870 ************************************ 00:21:01.870 END TEST nvmf_target_extra 00:21:01.870 ************************************ 00:21:02.148 14:57:44 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:02.148 14:57:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.148 14:57:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.148 14:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:02.148 ************************************ 00:21:02.148 START TEST nvmf_host 00:21:02.148 ************************************ 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:02.148 * Looking for test storage... 00:21:02.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:02.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.148 --rc genhtml_branch_coverage=1 00:21:02.148 --rc genhtml_function_coverage=1 00:21:02.148 --rc genhtml_legend=1 00:21:02.148 --rc geninfo_all_blocks=1 00:21:02.148 --rc geninfo_unexecuted_blocks=1 00:21:02.148 00:21:02.148 ' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:02.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.148 --rc genhtml_branch_coverage=1 00:21:02.148 --rc genhtml_function_coverage=1 00:21:02.148 --rc genhtml_legend=1 00:21:02.148 --rc geninfo_all_blocks=1 00:21:02.148 --rc geninfo_unexecuted_blocks=1 00:21:02.148 00:21:02.148 ' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:02.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.148 --rc genhtml_branch_coverage=1 00:21:02.148 --rc genhtml_function_coverage=1 00:21:02.148 --rc genhtml_legend=1 00:21:02.148 --rc geninfo_all_blocks=1 00:21:02.148 --rc geninfo_unexecuted_blocks=1 00:21:02.148 00:21:02.148 ' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:02.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.148 --rc genhtml_branch_coverage=1 00:21:02.148 --rc genhtml_function_coverage=1 00:21:02.148 --rc genhtml_legend=1 00:21:02.148 --rc geninfo_all_blocks=1 00:21:02.148 --rc geninfo_unexecuted_blocks=1 00:21:02.148 00:21:02.148 ' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
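The cmp_versions walk traced just above is scripts/common.sh deciding whether the installed lcov predates 2.x: it splits both version strings on ".-:" and compares the fields numerically, and because 1.15 < 2 the harness selects the pre-2.0 "--rc lcov_*" option spellings. A minimal standalone sketch of the same field-by-field compare (ver_lt is an illustrative name, not the harness function):

    # Split on . - : and compare numerically, field by field; missing fields count as 0.
    ver_lt() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((10#${a[i]:-0} < 10#${b[i]:-0})) && return 0   # strictly less
            ((10#${a[i]:-0} > 10#${b[i]:-0})) && return 1   # strictly greater
        done
        return 1   # equal is not less-than
    }
    ver_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1 style options"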
00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.148 14:57:44 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.149 ************************************ 00:21:02.149 START TEST nvmf_multicontroller 00:21:02.149 ************************************ 00:21:02.149 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:02.149 * Looking for test storage... 
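The "line 33: [: : integer expression expected" message printed above is bash's test builtin rejecting an empty operand of -eq: common.sh executes '[' '' -eq 1 ']' because the variable it checks is unset in this run. The test simply evaluates false and the harness continues. A hedged reproduction with two defensive rewrites (FOO is an illustrative name, not the variable common.sh actually tests):

    FOO=''
    [ "$FOO" -eq 1 ]                      # prints "[: : integer expression expected"
    [ -n "$FOO" ] && [ "$FOO" -eq 1 ]     # guard: skip the numeric test when empty
    [ "${FOO:-0}" -eq 1 ]                 # or: substitute a numeric default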
00:21:02.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.488 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:02.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.489 --rc genhtml_branch_coverage=1 00:21:02.489 --rc genhtml_function_coverage=1 00:21:02.489 --rc genhtml_legend=1 00:21:02.489 --rc geninfo_all_blocks=1 00:21:02.489 --rc geninfo_unexecuted_blocks=1 00:21:02.489 00:21:02.489 ' 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:02.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.489 --rc genhtml_branch_coverage=1 00:21:02.489 --rc genhtml_function_coverage=1 00:21:02.489 --rc genhtml_legend=1 00:21:02.489 --rc geninfo_all_blocks=1 00:21:02.489 --rc geninfo_unexecuted_blocks=1 00:21:02.489 00:21:02.489 ' 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:02.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.489 --rc genhtml_branch_coverage=1 00:21:02.489 --rc genhtml_function_coverage=1 00:21:02.489 --rc genhtml_legend=1 00:21:02.489 --rc geninfo_all_blocks=1 00:21:02.489 --rc geninfo_unexecuted_blocks=1 00:21:02.489 00:21:02.489 ' 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:02.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.489 --rc genhtml_branch_coverage=1 00:21:02.489 --rc genhtml_function_coverage=1 00:21:02.489 --rc genhtml_legend=1 00:21:02.489 --rc geninfo_all_blocks=1 00:21:02.489 --rc geninfo_unexecuted_blocks=1 00:21:02.489 00:21:02.489 ' 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:02.489 14:57:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.489 14:57:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.489 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:02.490 14:57:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.490 14:57:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.406 
14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:04.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:04.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.406 14:57:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:04.406 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:04.406 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.406 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
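The discovery pass above classifies PCI functions by vendor:device ID (0x8086:0x159b lands in the e810 bucket, driven by ice) and then resolves each function to its kernel netdev through sysfs, which is how cvl_0_0 and cvl_0_1 are found. The same lookup can be done directly against sysfs; a minimal sketch, using the one vendor/device pair this log matched (other NICs would need their own IDs):

    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue   # E810 variant seen above
        # each entry under $pci/net/ is a netdev bound to this PCI function
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/}: ${net##*/}"
        done
    done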
00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.407 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:21:04.665 00:21:04.665 --- 10.0.0.2 ping statistics --- 00:21:04.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.665 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:04.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:21:04.665 00:21:04.665 --- 10.0.0.1 ping statistics --- 00:21:04.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.665 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=724692 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 724692 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 724692 ']' 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.665 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.923 [2024-12-11 14:57:47.439117] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
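nvmf_tcp_init, traced above, splits the E810 port pair across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables ACCEPT rule opens port 4420, and a ping in each direction proves the wiring before the target starts. Condensed from the commands in this log (requires root; interface and namespace names as used here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the peer port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator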
00:21:04.923 [2024-12-11 14:57:47.439208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.923 [2024-12-11 14:57:47.511552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:04.923 [2024-12-11 14:57:47.568931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.923 [2024-12-11 14:57:47.568986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.923 [2024-12-11 14:57:47.569014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.923 [2024-12-11 14:57:47.569025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.923 [2024-12-11 14:57:47.569034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.923 [2024-12-11 14:57:47.570383] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.923 [2024-12-11 14:57:47.570447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.923 [2024-12-11 14:57:47.570451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.923 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.923 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:04.923 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:04.923 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:04.923 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 [2024-12-11 14:57:47.711018] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.181 Malloc0 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.181 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 [2024-12-11 14:57:47.775950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 [2024-12-11 14:57:47.783809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 Malloc1 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=724719 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 724719 /var/tmp/bdevperf.sock 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 724719 ']' 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
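multicontroller.sh has now built a target with two subsystems, cnode1 and cnode2, each backed by a 64 MiB malloc bdev and listening on both 4420 and 4421, before launching bdevperf against /var/tmp/bdevperf.sock. rpc_cmd in this harness ultimately drives scripts/rpc.py, so the same target could be stood up by hand; a transcription of the calls traced above (cnode2 mirrors cnode1 with Malloc1 on the same two ports):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The duplicate bdev_nvme_attach_controller attempts that follow are expected to fail with -114: once NVMe0 exists, re-attaching the same controller name with a conflicting hostnqn or subnqn, or with multipath disabled, is rejected, which is exactly what the NOT wrappers assert.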
00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.182 14:57:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.442 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.442 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:05.442 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:05.442 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.442 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.702 NVMe0n1 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.702 1 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.702 request: 00:21:05.702 { 00:21:05.702 "name": "NVMe0", 00:21:05.702 "trtype": "tcp", 00:21:05.702 "traddr": "10.0.0.2", 00:21:05.702 "adrfam": "ipv4", 00:21:05.702 "trsvcid": "4420", 00:21:05.702 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:05.702 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:05.702 "hostaddr": "10.0.0.1", 00:21:05.702 "prchk_reftag": false, 00:21:05.702 "prchk_guard": false, 00:21:05.702 "hdgst": false, 00:21:05.702 "ddgst": false, 00:21:05.702 "allow_unrecognized_csi": false, 00:21:05.702 "method": "bdev_nvme_attach_controller", 00:21:05.702 "req_id": 1 00:21:05.702 } 00:21:05.702 Got JSON-RPC error response 00:21:05.702 response: 00:21:05.702 { 00:21:05.702 "code": -114, 00:21:05.702 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:05.702 } 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.702 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.703 request: 00:21:05.703 { 00:21:05.703 "name": "NVMe0", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:05.703 "hostaddr": "10.0.0.1", 00:21:05.703 "prchk_reftag": false, 00:21:05.703 "prchk_guard": false, 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false, 00:21:05.703 "allow_unrecognized_csi": false, 00:21:05.703 "method": "bdev_nvme_attach_controller", 00:21:05.703 "req_id": 1 00:21:05.703 } 00:21:05.703 Got JSON-RPC error response 00:21:05.703 response: 00:21:05.703 { 00:21:05.703 "code": -114, 00:21:05.703 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:05.703 } 00:21:05.703 14:57:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.703 request: 00:21:05.703 { 00:21:05.703 "name": "NVMe0", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.703 "hostaddr": "10.0.0.1", 00:21:05.703 "prchk_reftag": false, 00:21:05.703 "prchk_guard": false, 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false, 00:21:05.703 "multipath": "disable", 00:21:05.703 "allow_unrecognized_csi": false, 00:21:05.703 "method": "bdev_nvme_attach_controller", 00:21:05.703 "req_id": 1 00:21:05.703 } 00:21:05.703 Got JSON-RPC error response 00:21:05.703 response: 00:21:05.703 { 00:21:05.703 "code": -114, 00:21:05.703 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:05.703 } 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.703 14:57:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.703 request: 00:21:05.703 { 00:21:05.703 "name": "NVMe0", 00:21:05.703 "trtype": "tcp", 00:21:05.703 "traddr": "10.0.0.2", 00:21:05.703 "adrfam": "ipv4", 00:21:05.703 "trsvcid": "4420", 00:21:05.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.703 "hostaddr": "10.0.0.1", 00:21:05.703 "prchk_reftag": false, 00:21:05.703 "prchk_guard": false, 00:21:05.703 "hdgst": false, 00:21:05.703 "ddgst": false, 00:21:05.703 "multipath": "failover", 00:21:05.703 "allow_unrecognized_csi": false, 00:21:05.703 "method": "bdev_nvme_attach_controller", 00:21:05.703 "req_id": 1 00:21:05.703 } 00:21:05.703 Got JSON-RPC error response 00:21:05.703 response: 00:21:05.703 { 00:21:05.703 "code": -114, 00:21:05.703 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:05.703 } 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.703 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.962 NVMe0n1 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.962 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:06.220 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:06.220 14:57:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.155 { 00:21:07.155 "results": [ 00:21:07.155 { 00:21:07.155 "job": "NVMe0n1", 00:21:07.155 "core_mask": "0x1", 00:21:07.155 "workload": "write", 00:21:07.155 "status": "finished", 00:21:07.155 "queue_depth": 128, 00:21:07.155 "io_size": 4096, 00:21:07.155 "runtime": 1.004174, 00:21:07.155 "iops": 18784.593108365683, 00:21:07.155 "mibps": 73.37731682955345, 00:21:07.155 "io_failed": 0, 00:21:07.155 "io_timeout": 0, 00:21:07.155 "avg_latency_us": 6802.373807237764, 00:21:07.155 "min_latency_us": 4126.34074074074, 00:21:07.155 "max_latency_us": 12379.022222222222 00:21:07.155 } 00:21:07.155 ], 00:21:07.155 "core_count": 1 00:21:07.155 } 00:21:07.155 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:07.155 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.155 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 724719 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 724719 ']' 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 724719 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 724719 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 724719' 00:21:07.415 killing process with pid 724719 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 724719 00:21:07.415 14:57:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 724719 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:07.674 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:07.674 [2024-12-11 14:57:47.891019] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:21:07.674 [2024-12-11 14:57:47.891107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724719 ]
00:21:07.674 [2024-12-11 14:57:47.959430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:07.674 [2024-12-11 14:57:48.018234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:21:07.674 [2024-12-11 14:57:48.777947] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name df7a7546-a0a7-4393-8e24-fe15fef60a81 already exists
00:21:07.674 [2024-12-11 14:57:48.777985] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:df7a7546-a0a7-4393-8e24-fe15fef60a81 alias for bdev NVMe1n1
00:21:07.674 [2024-12-11 14:57:48.778015] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:21:07.674 Running I/O for 1 seconds...
00:21:07.674 18735.00 IOPS, 73.18 MiB/s
00:21:07.674 Latency(us)
00:21:07.674 [2024-12-11T13:57:50.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:07.674 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:07.674 NVMe0n1 : 1.00 18784.59 73.38 0.00 0.00 6802.37 4126.34 12379.02
00:21:07.674 [2024-12-11T13:57:50.447Z] ===================================================================================================================
00:21:07.674 [2024-12-11T13:57:50.447Z] Total : 18784.59 73.38 0.00 0.00 6802.37 4126.34 12379.02
00:21:07.674 Received shutdown signal, test time was about 1.000000 seconds
00:21:07.674
00:21:07.674 Latency(us)
00:21:07.674 [2024-12-11T13:57:50.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:07.674 [2024-12-11T13:57:50.447Z] ===================================================================================================================
00:21:07.674 [2024-12-11T13:57:50.447Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:07.674 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:07.674 rmmod nvme_tcp
00:21:07.674 rmmod nvme_fabrics
00:21:07.674 rmmod nvme_keyring
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 724692 ']' 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 724692 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 724692 ']' 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 724692 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 724692 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 724692' 00:21:07.674 killing process with pid 724692 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 724692 00:21:07.674 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 724692 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.932 14:57:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.467 00:21:10.467 real 0m7.769s 00:21:10.467 user 0m12.284s 00:21:10.467 sys 0m2.361s 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:10.467 ************************************ 00:21:10.467 END TEST nvmf_multicontroller 00:21:10.467 ************************************ 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.467 ************************************ 00:21:10.467 START TEST nvmf_aer 00:21:10.467 ************************************ 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:10.467 * Looking for test storage... 00:21:10.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.467 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.468 --rc genhtml_branch_coverage=1 00:21:10.468 --rc genhtml_function_coverage=1 00:21:10.468 --rc genhtml_legend=1 00:21:10.468 --rc geninfo_all_blocks=1 00:21:10.468 --rc geninfo_unexecuted_blocks=1 00:21:10.468 00:21:10.468 ' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.468 --rc genhtml_branch_coverage=1 00:21:10.468 --rc genhtml_function_coverage=1 00:21:10.468 --rc genhtml_legend=1 00:21:10.468 --rc geninfo_all_blocks=1 00:21:10.468 --rc geninfo_unexecuted_blocks=1 00:21:10.468 00:21:10.468 ' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.468 --rc genhtml_branch_coverage=1 00:21:10.468 --rc genhtml_function_coverage=1 00:21:10.468 --rc genhtml_legend=1 00:21:10.468 --rc geninfo_all_blocks=1 00:21:10.468 --rc geninfo_unexecuted_blocks=1 00:21:10.468 00:21:10.468 ' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.468 --rc genhtml_branch_coverage=1 00:21:10.468 --rc genhtml_function_coverage=1 00:21:10.468 --rc genhtml_legend=1 00:21:10.468 --rc geninfo_all_blocks=1 00:21:10.468 --rc geninfo_unexecuted_blocks=1 00:21:10.468 00:21:10.468 ' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:10.468 14:57:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:12.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.371 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:12.372 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:12.372 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:12.372 14:57:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:12.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.372 14:57:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:12.372 
14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:12.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:12.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms
00:21:12.372
00:21:12.372 --- 10.0.0.2 ping statistics ---
00:21:12.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.372 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:12.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:12.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:21:12.372
00:21:12.372 --- 10.0.0.1 ping statistics ---
00:21:12.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:12.372 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:12.372 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:12.631 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:21:12.631 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=727050
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 727050
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 727050 ']'
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:12.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:12.632 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:12.632 [2024-12-11 14:57:55.202153] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:21:12.632 [2024-12-11 14:57:55.202253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.632 [2024-12-11 14:57:55.276411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.632 [2024-12-11 14:57:55.337818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.632 [2024-12-11 14:57:55.337889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.632 [2024-12-11 14:57:55.337903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.632 [2024-12-11 14:57:55.337930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.632 [2024-12-11 14:57:55.337941] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.632 [2024-12-11 14:57:55.339732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.632 [2024-12-11 14:57:55.339788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.632 [2024-12-11 14:57:55.339843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.632 [2024-12-11 14:57:55.339847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.890 [2024-12-11 14:57:55.494675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.890 Malloc0 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.890 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.891 [2024-12-11 14:57:55.558330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:12.891 [ 00:21:12.891 { 00:21:12.891 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.891 "subtype": "Discovery", 00:21:12.891 "listen_addresses": [], 00:21:12.891 "allow_any_host": true, 00:21:12.891 "hosts": [] 00:21:12.891 }, 00:21:12.891 { 00:21:12.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.891 "subtype": "NVMe", 00:21:12.891 "listen_addresses": [ 00:21:12.891 { 00:21:12.891 "trtype": "TCP", 00:21:12.891 "adrfam": "IPv4", 00:21:12.891 "traddr": "10.0.0.2", 00:21:12.891 "trsvcid": "4420" 00:21:12.891 } 00:21:12.891 ], 00:21:12.891 "allow_any_host": true, 00:21:12.891 "hosts": [], 00:21:12.891 "serial_number": "SPDK00000000000001", 00:21:12.891 "model_number": "SPDK bdev Controller", 00:21:12.891 "max_namespaces": 2, 00:21:12.891 "min_cntlid": 1, 00:21:12.891 "max_cntlid": 65519, 00:21:12.891 "namespaces": [ 00:21:12.891 { 00:21:12.891 "nsid": 1, 00:21:12.891 "bdev_name": "Malloc0", 00:21:12.891 "name": "Malloc0", 00:21:12.891 "nguid": "77CC67C2F2DA4DCA961C0BF7ADFA8FC0", 00:21:12.891 "uuid": "77cc67c2-f2da-4dca-961c-0bf7adfa8fc0" 00:21:12.891 } 00:21:12.891 ] 00:21:12.891 } 00:21:12.891 ] 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=727086 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:12.891 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.149 Malloc1 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.149 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.149 [ 00:21:13.149 { 00:21:13.149 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:13.150 "subtype": "Discovery", 00:21:13.150 "listen_addresses": [], 00:21:13.150 "allow_any_host": true, 00:21:13.150 "hosts": [] 00:21:13.150 }, 00:21:13.150 { 00:21:13.150 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.150 "subtype": "NVMe", 00:21:13.150 "listen_addresses": [ 00:21:13.150 { 00:21:13.150 "trtype": "TCP", 00:21:13.150 "adrfam": "IPv4", 00:21:13.150 "traddr": "10.0.0.2", 00:21:13.150 "trsvcid": "4420" 00:21:13.150 } 00:21:13.150 ], 00:21:13.150 "allow_any_host": true, 00:21:13.150 "hosts": [], 00:21:13.150 "serial_number": "SPDK00000000000001", 00:21:13.150 "model_number": "SPDK bdev Controller", 00:21:13.150 "max_namespaces": 2, 00:21:13.150 "min_cntlid": 1, 00:21:13.150 "max_cntlid": 65519, 00:21:13.150 "namespaces": [ 00:21:13.150 { 00:21:13.150 "nsid": 1, 00:21:13.150 "bdev_name": "Malloc0", 00:21:13.150 "name": "Malloc0", 00:21:13.150 "nguid": "77CC67C2F2DA4DCA961C0BF7ADFA8FC0", 00:21:13.150 "uuid": "77cc67c2-f2da-4dca-961c-0bf7adfa8fc0" 00:21:13.150 }, 00:21:13.150 { 00:21:13.150 "nsid": 2, 00:21:13.150 "bdev_name": "Malloc1", 00:21:13.150 "name": "Malloc1", 00:21:13.150 "nguid": "1441A8B1EE7F41D99107B5387D692591", 00:21:13.150 "uuid": 
"1441a8b1-ee7f-41d9-9107-b5387d692591" 00:21:13.150 } 00:21:13.150 ] 00:21:13.150 } 00:21:13.150 ] 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 727086 00:21:13.150 Asynchronous Event Request test 00:21:13.150 Attaching to 10.0.0.2 00:21:13.150 Attached to 10.0.0.2 00:21:13.150 Registering asynchronous event callbacks... 00:21:13.150 Starting namespace attribute notice tests for all controllers... 00:21:13.150 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:13.150 aer_cb - Changed Namespace 00:21:13.150 Cleaning up... 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.150 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.408 rmmod nvme_tcp 00:21:13.408 rmmod nvme_fabrics 00:21:13.408 rmmod nvme_keyring 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 727050 ']' 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 727050 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 727050 ']' 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 727050 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:13.408 14:57:55 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.408 14:57:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 727050 00:21:13.408 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.408 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.408 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 727050' 00:21:13.408 killing process with pid 727050 00:21:13.408 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 727050 00:21:13.408 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 727050 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.667 14:57:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.576 00:21:15.576 real 0m5.597s 00:21:15.576 user 0m4.389s 00:21:15.576 sys 0m2.043s 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:15.576 ************************************ 00:21:15.576 END TEST nvmf_aer 00:21:15.576 ************************************ 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.576 ************************************ 00:21:15.576 START TEST nvmf_async_init 00:21:15.576 ************************************ 00:21:15.576 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.835 * Looking for test storage... 
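The nvmf_aer run that just ended exercised the target's Changed Namespace AEN: the aer helper binary connects, arms its AER callbacks, then touches /tmp/aer_touch_file so the script knows it is safe to hot-add a second namespace; adding Malloc1 as nsid 2 is what fires the log-page-4 notice seen in the output. A sketch of the same sequence driven by hand (rpc.py is the standalone form of the rpc_cmd helper in the trace; paths, addresses and NQNs as used in this run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # start the AER listener; it touches the file once its callbacks are armed
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    # hot-adding a second namespace triggers the Changed Namespace AEN
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2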
00:21:15.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.835 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:15.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.836 --rc genhtml_branch_coverage=1 00:21:15.836 --rc genhtml_function_coverage=1 00:21:15.836 --rc genhtml_legend=1 00:21:15.836 --rc geninfo_all_blocks=1 00:21:15.836 --rc geninfo_unexecuted_blocks=1 00:21:15.836 00:21:15.836 ' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:15.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.836 --rc genhtml_branch_coverage=1 00:21:15.836 --rc genhtml_function_coverage=1 00:21:15.836 --rc genhtml_legend=1 00:21:15.836 --rc geninfo_all_blocks=1 00:21:15.836 --rc geninfo_unexecuted_blocks=1 00:21:15.836 00:21:15.836 ' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:15.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.836 --rc genhtml_branch_coverage=1 00:21:15.836 --rc genhtml_function_coverage=1 00:21:15.836 --rc genhtml_legend=1 00:21:15.836 --rc geninfo_all_blocks=1 00:21:15.836 --rc geninfo_unexecuted_blocks=1 00:21:15.836 00:21:15.836 ' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:15.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.836 --rc genhtml_branch_coverage=1 00:21:15.836 --rc genhtml_function_coverage=1 00:21:15.836 --rc genhtml_legend=1 00:21:15.836 --rc geninfo_all_blocks=1 00:21:15.836 --rc geninfo_unexecuted_blocks=1 00:21:15.836 00:21:15.836 ' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.836 14:57:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:15.836 14:57:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1bf4d487f394463e8e8ce3c0134d18f8 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:15.836 14:57:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:18.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:18.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:18.370 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:18.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.370 14:58:00 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:18.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:21:18.370 00:21:18.370 --- 10.0.0.2 ping statistics --- 00:21:18.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.370 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:18.370 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:18.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:21:18.371 00:21:18.371 --- 10.0.0.1 ping statistics --- 00:21:18.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.371 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=729147 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 729147 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 729147 ']' 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.371 14:58:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.371 [2024-12-11 14:58:00.904466] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
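That startup notice is the target coming up inside the private namespace: nvmftestinit has split the two e810 ports across separate network stacks so initiator and target traffic cross the physical link, and the pings above verify both directions before the app starts. A sketch of the topology it built (interface names cvl_0_0/cvl_0_1 are specific to this host's NICs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # tagged so teardown can strip exactly this rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # the target itself then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1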
00:21:18.371 [2024-12-11 14:58:00.904564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.371 [2024-12-11 14:58:00.978180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.371 [2024-12-11 14:58:01.033923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.371 [2024-12-11 14:58:01.033975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.371 [2024-12-11 14:58:01.034002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.371 [2024-12-11 14:58:01.034013] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.371 [2024-12-11 14:58:01.034023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.371 [2024-12-11 14:58:01.034643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 [2024-12-11 14:58:01.169224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 null0 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1bf4d487f394463e8e8ce3c0134d18f8 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.630 [2024-12-11 14:58:01.209480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.630 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.890 nvme0n1 00:21:18.890 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.890 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.890 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.890 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.890 [ 00:21:18.890 { 00:21:18.890 "name": "nvme0n1", 00:21:18.890 "aliases": [ 00:21:18.890 "1bf4d487-f394-463e-8e8c-e3c0134d18f8" 00:21:18.890 ], 00:21:18.890 "product_name": "NVMe disk", 00:21:18.890 "block_size": 512, 00:21:18.890 "num_blocks": 2097152, 00:21:18.891 "uuid": "1bf4d487-f394-463e-8e8c-e3c0134d18f8", 00:21:18.891 "numa_id": 0, 00:21:18.891 "assigned_rate_limits": { 00:21:18.891 "rw_ios_per_sec": 0, 00:21:18.891 "rw_mbytes_per_sec": 0, 00:21:18.891 "r_mbytes_per_sec": 0, 00:21:18.891 "w_mbytes_per_sec": 0 00:21:18.891 }, 00:21:18.891 "claimed": false, 00:21:18.891 "zoned": false, 00:21:18.891 "supported_io_types": { 00:21:18.891 "read": true, 00:21:18.891 "write": true, 00:21:18.891 "unmap": false, 00:21:18.891 "flush": true, 00:21:18.891 "reset": true, 00:21:18.891 "nvme_admin": true, 00:21:18.891 "nvme_io": true, 00:21:18.891 "nvme_io_md": false, 00:21:18.891 "write_zeroes": true, 00:21:18.891 "zcopy": false, 00:21:18.891 "get_zone_info": false, 00:21:18.891 "zone_management": false, 00:21:18.891 "zone_append": false, 00:21:18.891 "compare": true, 00:21:18.891 "compare_and_write": true, 00:21:18.891 "abort": true, 00:21:18.891 "seek_hole": false, 00:21:18.891 "seek_data": false, 00:21:18.891 "copy": true, 00:21:18.891 "nvme_iov_md": false 00:21:18.891 }, 00:21:18.891 
"memory_domains": [ 00:21:18.891 { 00:21:18.891 "dma_device_id": "system", 00:21:18.891 "dma_device_type": 1 00:21:18.891 } 00:21:18.891 ], 00:21:18.891 "driver_specific": { 00:21:18.891 "nvme": [ 00:21:18.891 { 00:21:18.891 "trid": { 00:21:18.891 "trtype": "TCP", 00:21:18.891 "adrfam": "IPv4", 00:21:18.891 "traddr": "10.0.0.2", 00:21:18.891 "trsvcid": "4420", 00:21:18.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.891 }, 00:21:18.891 "ctrlr_data": { 00:21:18.891 "cntlid": 1, 00:21:18.891 "vendor_id": "0x8086", 00:21:18.891 "model_number": "SPDK bdev Controller", 00:21:18.891 "serial_number": "00000000000000000000", 00:21:18.891 "firmware_revision": "25.01", 00:21:18.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.891 "oacs": { 00:21:18.891 "security": 0, 00:21:18.891 "format": 0, 00:21:18.891 "firmware": 0, 00:21:18.891 "ns_manage": 0 00:21:18.891 }, 00:21:18.891 "multi_ctrlr": true, 00:21:18.891 "ana_reporting": false 00:21:18.891 }, 00:21:18.891 "vs": { 00:21:18.891 "nvme_version": "1.3" 00:21:18.891 }, 00:21:18.891 "ns_data": { 00:21:18.891 "id": 1, 00:21:18.891 "can_share": true 00:21:18.891 } 00:21:18.891 } 00:21:18.891 ], 00:21:18.891 "mp_policy": "active_passive" 00:21:18.891 } 00:21:18.891 } 00:21:18.891 ] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 [2024-12-11 14:58:01.458002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:18.891 [2024-12-11 14:58:01.458092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a06700 (9): Bad file descriptor 00:21:18.891 [2024-12-11 14:58:01.590693] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 [ 00:21:18.891 { 00:21:18.891 "name": "nvme0n1", 00:21:18.891 "aliases": [ 00:21:18.891 "1bf4d487-f394-463e-8e8c-e3c0134d18f8" 00:21:18.891 ], 00:21:18.891 "product_name": "NVMe disk", 00:21:18.891 "block_size": 512, 00:21:18.891 "num_blocks": 2097152, 00:21:18.891 "uuid": "1bf4d487-f394-463e-8e8c-e3c0134d18f8", 00:21:18.891 "numa_id": 0, 00:21:18.891 "assigned_rate_limits": { 00:21:18.891 "rw_ios_per_sec": 0, 00:21:18.891 "rw_mbytes_per_sec": 0, 00:21:18.891 "r_mbytes_per_sec": 0, 00:21:18.891 "w_mbytes_per_sec": 0 00:21:18.891 }, 00:21:18.891 "claimed": false, 00:21:18.891 "zoned": false, 00:21:18.891 "supported_io_types": { 00:21:18.891 "read": true, 00:21:18.891 "write": true, 00:21:18.891 "unmap": false, 00:21:18.891 "flush": true, 00:21:18.891 "reset": true, 00:21:18.891 "nvme_admin": true, 00:21:18.891 "nvme_io": true, 00:21:18.891 "nvme_io_md": false, 00:21:18.891 "write_zeroes": true, 00:21:18.891 "zcopy": false, 00:21:18.891 "get_zone_info": false, 00:21:18.891 "zone_management": false, 00:21:18.891 "zone_append": false, 00:21:18.891 "compare": true, 00:21:18.891 "compare_and_write": true, 00:21:18.891 "abort": true, 00:21:18.891 "seek_hole": false, 00:21:18.891 "seek_data": false, 00:21:18.891 "copy": true, 00:21:18.891 "nvme_iov_md": false 00:21:18.891 }, 00:21:18.891 "memory_domains": [ 00:21:18.891 { 00:21:18.891 "dma_device_id": "system", 00:21:18.891 "dma_device_type": 1 00:21:18.891 } 00:21:18.891 ], 00:21:18.891 "driver_specific": { 00:21:18.891 "nvme": [ 00:21:18.891 { 00:21:18.891 "trid": { 00:21:18.891 "trtype": "TCP", 00:21:18.891 "adrfam": "IPv4", 00:21:18.891 "traddr": "10.0.0.2", 00:21:18.891 "trsvcid": "4420", 00:21:18.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:18.891 }, 00:21:18.891 "ctrlr_data": { 00:21:18.891 "cntlid": 2, 00:21:18.891 "vendor_id": "0x8086", 00:21:18.891 "model_number": "SPDK bdev Controller", 00:21:18.891 "serial_number": "00000000000000000000", 00:21:18.891 "firmware_revision": "25.01", 00:21:18.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:18.891 "oacs": { 00:21:18.891 "security": 0, 00:21:18.891 "format": 0, 00:21:18.891 "firmware": 0, 00:21:18.891 "ns_manage": 0 00:21:18.891 }, 00:21:18.891 "multi_ctrlr": true, 00:21:18.891 "ana_reporting": false 00:21:18.891 }, 00:21:18.891 "vs": { 00:21:18.891 "nvme_version": "1.3" 00:21:18.891 }, 00:21:18.891 "ns_data": { 00:21:18.891 "id": 1, 00:21:18.891 "can_share": true 00:21:18.891 } 00:21:18.891 } 00:21:18.891 ], 00:21:18.891 "mp_policy": "active_passive" 00:21:18.891 } 00:21:18.891 } 00:21:18.891 ] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
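With the plaintext controller detached, the rest of the test repeats the attach over TLS, as the trace below shows: a PSK is written to a 0600 temp file, registered in the keyring, and the subsystem is flipped from allow-any-host to an explicit host entry carrying that key on a --secure-channel listener at port 4421. A condensed sketch of that flow (the key string is the fixed test PSK this suite uses; both --secure-channel and --psk are flagged experimental by the target, as the notices in the trace confirm):

    KEY_PATH=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0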
00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.zePOgp5QVo 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.zePOgp5QVo 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.zePOgp5QVo 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 [2024-12-11 14:58:01.646615] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.891 [2024-12-11 14:58:01.646737] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.891 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.152 [2024-12-11 14:58:01.662669] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.152 nvme0n1 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.152 [ 00:21:19.152 { 00:21:19.152 "name": "nvme0n1", 00:21:19.152 "aliases": [ 00:21:19.152 "1bf4d487-f394-463e-8e8c-e3c0134d18f8" 00:21:19.152 ], 00:21:19.152 "product_name": "NVMe disk", 00:21:19.152 "block_size": 512, 00:21:19.152 "num_blocks": 2097152, 00:21:19.152 "uuid": "1bf4d487-f394-463e-8e8c-e3c0134d18f8", 00:21:19.152 "numa_id": 0, 00:21:19.152 "assigned_rate_limits": { 00:21:19.152 "rw_ios_per_sec": 0, 00:21:19.152 "rw_mbytes_per_sec": 0, 00:21:19.152 "r_mbytes_per_sec": 0, 00:21:19.152 "w_mbytes_per_sec": 0 00:21:19.152 }, 00:21:19.152 "claimed": false, 00:21:19.152 "zoned": false, 00:21:19.152 "supported_io_types": { 00:21:19.152 "read": true, 00:21:19.152 "write": true, 00:21:19.152 "unmap": false, 00:21:19.152 "flush": true, 00:21:19.152 "reset": true, 00:21:19.152 "nvme_admin": true, 00:21:19.152 "nvme_io": true, 00:21:19.152 "nvme_io_md": false, 00:21:19.152 "write_zeroes": true, 00:21:19.152 "zcopy": false, 00:21:19.152 "get_zone_info": false, 00:21:19.152 "zone_management": false, 00:21:19.152 "zone_append": false, 00:21:19.152 "compare": true, 00:21:19.152 "compare_and_write": true, 00:21:19.152 "abort": true, 00:21:19.152 "seek_hole": false, 00:21:19.152 "seek_data": false, 00:21:19.152 "copy": true, 00:21:19.152 "nvme_iov_md": false 00:21:19.152 }, 00:21:19.152 "memory_domains": [ 00:21:19.152 { 00:21:19.152 "dma_device_id": "system", 00:21:19.152 "dma_device_type": 1 00:21:19.152 } 00:21:19.152 ], 00:21:19.152 "driver_specific": { 00:21:19.152 "nvme": [ 00:21:19.152 { 00:21:19.152 "trid": { 00:21:19.152 "trtype": "TCP", 00:21:19.152 "adrfam": "IPv4", 00:21:19.152 "traddr": "10.0.0.2", 00:21:19.152 "trsvcid": "4421", 00:21:19.152 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:19.152 }, 00:21:19.152 "ctrlr_data": { 00:21:19.152 "cntlid": 3, 00:21:19.152 "vendor_id": "0x8086", 00:21:19.152 "model_number": "SPDK bdev Controller", 00:21:19.152 "serial_number": "00000000000000000000", 00:21:19.152 "firmware_revision": "25.01", 00:21:19.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:19.152 "oacs": { 00:21:19.152 "security": 0, 00:21:19.152 "format": 0, 00:21:19.152 "firmware": 0, 00:21:19.152 "ns_manage": 0 00:21:19.152 }, 00:21:19.152 "multi_ctrlr": true, 00:21:19.152 "ana_reporting": false 00:21:19.152 }, 00:21:19.152 "vs": { 00:21:19.152 "nvme_version": "1.3" 00:21:19.152 }, 00:21:19.152 "ns_data": { 00:21:19.152 "id": 1, 00:21:19.152 "can_share": true 00:21:19.152 } 00:21:19.152 } 00:21:19.152 ], 00:21:19.152 "mp_policy": "active_passive" 00:21:19.152 } 00:21:19.152 } 00:21:19.152 ] 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.zePOgp5QVo 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
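Teardown from here mirrors the init path: nvmftestfini unloads the host-side NVMe modules, kills the target (pid 729147 in this run), strips the SPDK_NVMF-tagged iptables rule, and removes the namespace before flushing the initiator address. Roughly equivalent to (a sketch):

    modprobe -v -r nvme-tcp           # also pulls out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 729147                       # $nvmfpid for this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk   # what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1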
00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.152 rmmod nvme_tcp 00:21:19.152 rmmod nvme_fabrics 00:21:19.152 rmmod nvme_keyring 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 729147 ']' 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 729147 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 729147 ']' 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 729147 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 729147 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 729147' 00:21:19.152 killing process with pid 729147 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 729147 00:21:19.152 14:58:01 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 729147 00:21:19.411 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:19.411 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:19.411 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:19.411 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:19.411 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:19.411 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:19.412 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:19.412 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:19.412 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:19.412 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.412 
14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.412 14:58:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:21.954 00:21:21.954 real 0m5.804s 00:21:21.954 user 0m2.243s 00:21:21.954 sys 0m1.986s 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:21.954 ************************************ 00:21:21.954 END TEST nvmf_async_init 00:21:21.954 ************************************ 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.954 ************************************ 00:21:21.954 START TEST dma 00:21:21.954 ************************************ 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:21.954 * Looking for test storage... 00:21:21.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.954 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.955 --rc genhtml_branch_coverage=1 00:21:21.955 --rc genhtml_function_coverage=1 00:21:21.955 --rc genhtml_legend=1 00:21:21.955 --rc geninfo_all_blocks=1 00:21:21.955 --rc geninfo_unexecuted_blocks=1 00:21:21.955 00:21:21.955 ' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.955 --rc genhtml_branch_coverage=1 00:21:21.955 --rc genhtml_function_coverage=1 00:21:21.955 --rc genhtml_legend=1 00:21:21.955 --rc geninfo_all_blocks=1 00:21:21.955 --rc geninfo_unexecuted_blocks=1 00:21:21.955 00:21:21.955 ' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.955 --rc genhtml_branch_coverage=1 00:21:21.955 --rc genhtml_function_coverage=1 00:21:21.955 --rc genhtml_legend=1 00:21:21.955 --rc geninfo_all_blocks=1 00:21:21.955 --rc geninfo_unexecuted_blocks=1 00:21:21.955 00:21:21.955 ' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:21.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.955 --rc genhtml_branch_coverage=1 00:21:21.955 --rc genhtml_function_coverage=1 00:21:21.955 --rc genhtml_legend=1 00:21:21.955 --rc geninfo_all_blocks=1 00:21:21.955 --rc geninfo_unexecuted_blocks=1 00:21:21.955 00:21:21.955 ' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.955 
14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:21.955 00:21:21.955 real 0m0.153s 00:21:21.955 user 0m0.098s 00:21:21.955 sys 0m0.064s 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:21.955 ************************************ 00:21:21.955 END TEST dma 00:21:21.955 ************************************ 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.955 ************************************ 00:21:21.955 START TEST nvmf_identify 00:21:21.955 
************************************ 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:21.955 * Looking for test storage... 00:21:21.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.955 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:21.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.956 --rc genhtml_branch_coverage=1 00:21:21.956 --rc genhtml_function_coverage=1 00:21:21.956 --rc genhtml_legend=1 00:21:21.956 --rc geninfo_all_blocks=1 00:21:21.956 --rc geninfo_unexecuted_blocks=1 00:21:21.956 00:21:21.956 ' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:21.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.956 --rc genhtml_branch_coverage=1 00:21:21.956 --rc genhtml_function_coverage=1 00:21:21.956 --rc genhtml_legend=1 00:21:21.956 --rc geninfo_all_blocks=1 00:21:21.956 --rc geninfo_unexecuted_blocks=1 00:21:21.956 00:21:21.956 ' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:21.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.956 --rc genhtml_branch_coverage=1 00:21:21.956 --rc genhtml_function_coverage=1 00:21:21.956 --rc genhtml_legend=1 00:21:21.956 --rc geninfo_all_blocks=1 00:21:21.956 --rc geninfo_unexecuted_blocks=1 00:21:21.956 00:21:21.956 ' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:21.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.956 --rc genhtml_branch_coverage=1 00:21:21.956 --rc genhtml_function_coverage=1 00:21:21.956 --rc genhtml_legend=1 00:21:21.956 --rc geninfo_all_blocks=1 00:21:21.956 --rc geninfo_unexecuted_blocks=1 00:21:21.956 00:21:21.956 ' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:21.956 14:58:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:23.869 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:23.869 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.869 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:23.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:23.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.870 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:21:24.131 00:21:24.131 --- 10.0.0.2 ping statistics --- 00:21:24.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.131 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:21:24.131 00:21:24.131 --- 10.0.0.1 ping statistics --- 00:21:24.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.131 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=731289 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 731289 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 731289 ']' 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.131 14:58:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.131 [2024-12-11 14:58:06.780106] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
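Before the identify test proper starts, nvmftestinit has isolated the first e810 port in its own network namespace and verified reachability in both directions with ping; the target is then launched inside that namespace. Condensed into a sketch (illustrative; the interface and namespace names are the ones this host's log derived, and the nvmf_tgt flags are exactly those logged here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &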
00:21:24.131 [2024-12-11 14:58:06.780207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.131 [2024-12-11 14:58:06.856613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.390 [2024-12-11 14:58:06.918138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.390 [2024-12-11 14:58:06.918195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.390 [2024-12-11 14:58:06.918223] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.390 [2024-12-11 14:58:06.918234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.390 [2024-12-11 14:58:06.918244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.390 [2024-12-11 14:58:06.919999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.390 [2024-12-11 14:58:06.920060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.390 [2024-12-11 14:58:06.920085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.390 [2024-12-11 14:58:06.920088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 [2024-12-11 14:58:07.051066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 Malloc0 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.390 [2024-12-11 14:58:07.141517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.390 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:24.391 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.391 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.391 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.391 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:24.391 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.391 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:24.391 [ 00:21:24.391 { 00:21:24.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:24.391 "subtype": "Discovery", 00:21:24.391 "listen_addresses": [ 00:21:24.391 { 00:21:24.651 "trtype": "TCP", 00:21:24.652 "adrfam": "IPv4", 00:21:24.652 "traddr": "10.0.0.2", 00:21:24.652 "trsvcid": "4420" 00:21:24.652 } 00:21:24.652 ], 00:21:24.652 "allow_any_host": true, 00:21:24.652 "hosts": [] 00:21:24.652 }, 00:21:24.652 { 00:21:24.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.652 "subtype": "NVMe", 00:21:24.652 "listen_addresses": [ 00:21:24.652 { 00:21:24.652 "trtype": "TCP", 00:21:24.652 "adrfam": "IPv4", 00:21:24.652 "traddr": "10.0.0.2", 00:21:24.652 "trsvcid": "4420" 00:21:24.652 } 00:21:24.652 ], 00:21:24.652 "allow_any_host": true, 00:21:24.652 "hosts": [], 00:21:24.652 "serial_number": "SPDK00000000000001", 00:21:24.652 "model_number": "SPDK bdev Controller", 00:21:24.652 "max_namespaces": 32, 00:21:24.652 "min_cntlid": 1, 00:21:24.652 "max_cntlid": 65519, 00:21:24.652 "namespaces": [ 00:21:24.652 { 00:21:24.652 "nsid": 1, 00:21:24.652 "bdev_name": "Malloc0", 00:21:24.652 "name": "Malloc0", 00:21:24.652 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:24.652 "eui64": "ABCDEF0123456789", 00:21:24.652 "uuid": "92a5a73d-98af-4d9a-ad06-a7b686f7c94b" 00:21:24.652 } 00:21:24.652 ] 00:21:24.652 } 00:21:24.652 ] 00:21:24.652 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.652 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:24.652 [2024-12-11 14:58:07.183917] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:21:24.652 [2024-12-11 14:58:07.183964] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731316 ] 00:21:24.652 [2024-12-11 14:58:07.231894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:24.652 [2024-12-11 14:58:07.231975] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.652 [2024-12-11 14:58:07.231985] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.652 [2024-12-11 14:58:07.232002] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.652 [2024-12-11 14:58:07.232022] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.652 [2024-12-11 14:58:07.240004] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:24.652 [2024-12-11 14:58:07.240076] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f19690 0 00:21:24.652 [2024-12-11 14:58:07.240324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.652 [2024-12-11 14:58:07.240344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.652 [2024-12-11 14:58:07.240352] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.652 [2024-12-11 14:58:07.240358] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.652 [2024-12-11 14:58:07.240405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.240419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.240427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.652 [2024-12-11 14:58:07.240449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.652 [2024-12-11 14:58:07.240475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.652 [2024-12-11 14:58:07.247564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.652 [2024-12-11 14:58:07.247591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.652 [2024-12-11 14:58:07.247607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.247615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.652 [2024-12-11 14:58:07.247637] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.652 [2024-12-11 14:58:07.247649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:24.652 [2024-12-11 14:58:07.247659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:24.652 [2024-12-11 14:58:07.247679] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.247688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.247695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.652 [2024-12-11 14:58:07.247705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.652 [2024-12-11 14:58:07.247729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.652 [2024-12-11 14:58:07.247866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.652 [2024-12-11 14:58:07.247880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.652 [2024-12-11 14:58:07.247896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.247902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.652 [2024-12-11 14:58:07.247912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:24.652 [2024-12-11 14:58:07.247925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:24.652 [2024-12-11 14:58:07.247937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.247945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.247951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.652 [2024-12-11 14:58:07.247962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.652 [2024-12-11 14:58:07.247983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.652 [2024-12-11 14:58:07.248067] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.652 [2024-12-11 14:58:07.248079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.652 [2024-12-11 14:58:07.248086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.652 [2024-12-11 14:58:07.248103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:24.652 [2024-12-11 14:58:07.248117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.652 [2024-12-11 14:58:07.248129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.652 [2024-12-11 14:58:07.248153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.652 [2024-12-11 14:58:07.248174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 
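The surrounding debug lines trace the generic NVMe-over-Fabrics admin-queue bring-up rather than anything identify-specific: FABRIC CONNECT, property reads of VS and CAP, a check of CC.EN, disable-and-wait for CSTS.RDY = 0, CC.EN = 1, wait for CSTS.RDY = 1, and finally IDENTIFY controller (CNS 01h, the cdw10:00000001 visible just below). To replay this handshake against the same discovery service outside the harness, the logged invocation can be run standalone (assuming the target from the setup above is still listening):

    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all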
00:21:24.652 [2024-12-11 14:58:07.248257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.652 [2024-12-11 14:58:07.248270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.652 [2024-12-11 14:58:07.248277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.652 [2024-12-11 14:58:07.248293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.652 [2024-12-11 14:58:07.248310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.652 [2024-12-11 14:58:07.248336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.652 [2024-12-11 14:58:07.248356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.652 [2024-12-11 14:58:07.248438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.652 [2024-12-11 14:58:07.248450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.652 [2024-12-11 14:58:07.248457] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.652 [2024-12-11 14:58:07.248472] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:24.652 [2024-12-11 14:58:07.248480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:24.652 [2024-12-11 14:58:07.248493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.652 [2024-12-11 14:58:07.248603] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:24.652 [2024-12-11 14:58:07.248614] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.652 [2024-12-11 14:58:07.248630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248638] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.652 [2024-12-11 14:58:07.248658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.652 [2024-12-11 14:58:07.248681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.652 [2024-12-11 14:58:07.248799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.652 [2024-12-11 14:58:07.248812] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.652 [2024-12-11 14:58:07.248819] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.652 [2024-12-11 14:58:07.248835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.652 [2024-12-11 14:58:07.248855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.652 [2024-12-11 14:58:07.248870] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.248880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.653 [2024-12-11 14:58:07.248901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.653 [2024-12-11 14:58:07.248979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.653 [2024-12-11 14:58:07.248993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.653 [2024-12-11 14:58:07.249000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.249007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.653 [2024-12-11 14:58:07.249014] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.653 [2024-12-11 14:58:07.249022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:24.653 [2024-12-11 14:58:07.249036] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:24.653 [2024-12-11 14:58:07.249051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.653 [2024-12-11 14:58:07.249069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.249077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.249087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.653 [2024-12-11 14:58:07.249108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.653 [2024-12-11 14:58:07.249240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.653 [2024-12-11 14:58:07.249252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.653 [2024-12-11 14:58:07.249259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.249266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19690): datao=0, datal=4096, cccid=0 00:21:24.653 [2024-12-11 14:58:07.249274] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1f7b100) on tqpair(0x1f19690): expected_datao=0, payload_size=4096 00:21:24.653 [2024-12-11 14:58:07.249281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.249299] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.249309] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.289670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.653 [2024-12-11 14:58:07.289690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.653 [2024-12-11 14:58:07.289698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.289705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.653 [2024-12-11 14:58:07.289720] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:24.653 [2024-12-11 14:58:07.289729] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:24.653 [2024-12-11 14:58:07.289737] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:24.653 [2024-12-11 14:58:07.289747] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:24.653 [2024-12-11 14:58:07.289755] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:24.653 [2024-12-11 14:58:07.289763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:24.653 [2024-12-11 14:58:07.289784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.653 [2024-12-11 14:58:07.289802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.289811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.289817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.289829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.653 [2024-12-11 14:58:07.289852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.653 [2024-12-11 14:58:07.289947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.653 [2024-12-11 14:58:07.289959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.653 [2024-12-11 14:58:07.289966] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.289973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.653 [2024-12-11 14:58:07.289987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.289995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290001] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19690) 00:21:24.653 
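The identify_done lines above report transport max_xfer_size 4294967295 but MDTS max_xfer_size 131072: MDTS caps transfers at (1 << MDTS) pages of CAP.MPSMIN size, and 131072 = 4096 << 5, so the target presumably advertises MDTS=5 with a 4 KiB minimum page. A hedged sketch of that arithmetic against a connected controller handle (print_max_xfer is an illustrative name, not from the test):

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_max_xfer(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* Minimum memory page size is 2^(12 + CAP.MPSMIN): 4096 when MPSMIN=0 */
        uint64_t min_page = 1ULL << (12 + cap.bits.mpsmin);
        /* MDTS=0 means no limit; otherwise transfers are capped at min_page << MDTS */
        uint64_t max_xfer = cdata->mdts ? (min_page << cdata->mdts) : UINT64_MAX;

        printf("MDTS max_xfer_size %" PRIu64 "\n", max_xfer); /* 131072 in this run */
    }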
[2024-12-11 14:58:07.290010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.653 [2024-12-11 14:58:07.290021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.653 [2024-12-11 14:58:07.290052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.653 [2024-12-11 14:58:07.290084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290103] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.653 [2024-12-11 14:58:07.290122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.653 [2024-12-11 14:58:07.290157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.653 [2024-12-11 14:58:07.290171] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290188] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.653 [2024-12-11 14:58:07.290225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b100, cid 0, qid 0 00:21:24.653 [2024-12-11 14:58:07.290236] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b280, cid 1, qid 0 00:21:24.653 [2024-12-11 14:58:07.290243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b400, cid 2, qid 0 00:21:24.653 [2024-12-11 14:58:07.290251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b580, cid 3, qid 0 00:21:24.653 [2024-12-11 14:58:07.290257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b700, cid 4, qid 0 00:21:24.653 [2024-12-11 14:58:07.290440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.653 [2024-12-11 14:58:07.290454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.653 [2024-12-11 14:58:07.290461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:24.653 [2024-12-11 14:58:07.290468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b700) on tqpair=0x1f19690 00:21:24.653 [2024-12-11 14:58:07.290479] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:24.653 [2024-12-11 14:58:07.290488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:24.653 [2024-12-11 14:58:07.290506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.653 [2024-12-11 14:58:07.290557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b700, cid 4, qid 0 00:21:24.653 [2024-12-11 14:58:07.290715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.653 [2024-12-11 14:58:07.290729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.653 [2024-12-11 14:58:07.290737] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290743] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19690): datao=0, datal=4096, cccid=4 00:21:24.653 [2024-12-11 14:58:07.290750] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7b700) on tqpair(0x1f19690): expected_datao=0, payload_size=4096 00:21:24.653 [2024-12-11 14:58:07.290757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290767] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290775] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.653 [2024-12-11 14:58:07.290796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.653 [2024-12-11 14:58:07.290803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b700) on tqpair=0x1f19690 00:21:24.653 [2024-12-11 14:58:07.290836] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:24.653 [2024-12-11 14:58:07.290882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.653 [2024-12-11 14:58:07.290916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.653 [2024-12-11 14:58:07.290930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f19690) 00:21:24.653 [2024-12-11 14:58:07.290939] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.653 [2024-12-11 14:58:07.290965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b700, cid 4, qid 0 00:21:24.653 [2024-12-11 14:58:07.290977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b880, cid 5, qid 0 00:21:24.654 [2024-12-11 14:58:07.291133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.654 [2024-12-11 14:58:07.291145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.654 [2024-12-11 14:58:07.291151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.291158] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19690): datao=0, datal=1024, cccid=4 00:21:24.654 [2024-12-11 14:58:07.291165] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7b700) on tqpair(0x1f19690): expected_datao=0, payload_size=1024 00:21:24.654 [2024-12-11 14:58:07.291172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.291182] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.291189] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.291197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.654 [2024-12-11 14:58:07.291206] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.654 [2024-12-11 14:58:07.291212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.291219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b880) on tqpair=0x1f19690 00:21:24.654 [2024-12-11 14:58:07.335557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.654 [2024-12-11 14:58:07.335575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.654 [2024-12-11 14:58:07.335582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.335589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b700) on tqpair=0x1f19690 00:21:24.654 [2024-12-11 14:58:07.335607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.335616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19690) 00:21:24.654 [2024-12-11 14:58:07.335627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.654 [2024-12-11 14:58:07.335656] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b700, cid 4, qid 0 00:21:24.654 [2024-12-11 14:58:07.335800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.654 [2024-12-11 14:58:07.335812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.654 [2024-12-11 14:58:07.335819] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.335825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19690): datao=0, datal=3072, cccid=4 00:21:24.654 [2024-12-11 14:58:07.335833] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7b700) on tqpair(0x1f19690): expected_datao=0, payload_size=3072 00:21:24.654 [2024-12-11 14:58:07.335845] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.335865] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.335874] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.377636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.654 [2024-12-11 14:58:07.377655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.654 [2024-12-11 14:58:07.377663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.377670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b700) on tqpair=0x1f19690 00:21:24.654 [2024-12-11 14:58:07.377686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.377695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19690) 00:21:24.654 [2024-12-11 14:58:07.377706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.654 [2024-12-11 14:58:07.377736] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b700, cid 4, qid 0 00:21:24.654 [2024-12-11 14:58:07.377850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.654 [2024-12-11 14:58:07.377864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.654 [2024-12-11 14:58:07.377871] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.377877] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19690): datao=0, datal=8, cccid=4 00:21:24.654 [2024-12-11 14:58:07.377884] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f7b700) on tqpair(0x1f19690): expected_datao=0, payload_size=8 00:21:24.654 [2024-12-11 14:58:07.377892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.377901] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.377909] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.418648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.654 [2024-12-11 14:58:07.418668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.654 [2024-12-11 14:58:07.418675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.654 [2024-12-11 14:58:07.418683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b700) on tqpair=0x1f19690 00:21:24.654 ===================================================== 00:21:24.654 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:24.654 ===================================================== 00:21:24.654 Controller Capabilities/Features 00:21:24.654 ================================ 00:21:24.654 Vendor ID: 0000 00:21:24.654 Subsystem Vendor ID: 0000 00:21:24.654 Serial Number: .................... 00:21:24.654 Model Number: ........................................ 
00:21:24.654 Firmware Version: 25.01
00:21:24.654 Recommended Arb Burst: 0
00:21:24.654 IEEE OUI Identifier: 00 00 00
00:21:24.654 Multi-path I/O
00:21:24.654 May have multiple subsystem ports: No
00:21:24.654 May have multiple controllers: No
00:21:24.654 Associated with SR-IOV VF: No
00:21:24.654 Max Data Transfer Size: 131072
00:21:24.654 Max Number of Namespaces: 0
00:21:24.654 Max Number of I/O Queues: 1024
00:21:24.654 NVMe Specification Version (VS): 1.3
00:21:24.654 NVMe Specification Version (Identify): 1.3
00:21:24.654 Maximum Queue Entries: 128
00:21:24.654 Contiguous Queues Required: Yes
00:21:24.654 Arbitration Mechanisms Supported
00:21:24.654 Weighted Round Robin: Not Supported
00:21:24.654 Vendor Specific: Not Supported
00:21:24.654 Reset Timeout: 15000 ms
00:21:24.654 Doorbell Stride: 4 bytes
00:21:24.654 NVM Subsystem Reset: Not Supported
00:21:24.654 Command Sets Supported
00:21:24.654 NVM Command Set: Supported
00:21:24.654 Boot Partition: Not Supported
00:21:24.654 Memory Page Size Minimum: 4096 bytes
00:21:24.654 Memory Page Size Maximum: 4096 bytes
00:21:24.654 Persistent Memory Region: Not Supported
00:21:24.654 Optional Asynchronous Events Supported
00:21:24.654 Namespace Attribute Notices: Not Supported
00:21:24.654 Firmware Activation Notices: Not Supported
00:21:24.654 ANA Change Notices: Not Supported
00:21:24.654 PLE Aggregate Log Change Notices: Not Supported
00:21:24.654 LBA Status Info Alert Notices: Not Supported
00:21:24.654 EGE Aggregate Log Change Notices: Not Supported
00:21:24.654 Normal NVM Subsystem Shutdown event: Not Supported
00:21:24.654 Zone Descriptor Change Notices: Not Supported
00:21:24.654 Discovery Log Change Notices: Supported
00:21:24.654 Controller Attributes
00:21:24.654 128-bit Host Identifier: Not Supported
00:21:24.654 Non-Operational Permissive Mode: Not Supported
00:21:24.654 NVM Sets: Not Supported
00:21:24.654 Read Recovery Levels: Not Supported
00:21:24.654 Endurance Groups: Not Supported
00:21:24.654 Predictable Latency Mode: Not Supported
00:21:24.654 Traffic Based Keep ALive: Not Supported
00:21:24.654 Namespace Granularity: Not Supported
00:21:24.654 SQ Associations: Not Supported
00:21:24.654 UUID List: Not Supported
00:21:24.654 Multi-Domain Subsystem: Not Supported
00:21:24.654 Fixed Capacity Management: Not Supported
00:21:24.654 Variable Capacity Management: Not Supported
00:21:24.654 Delete Endurance Group: Not Supported
00:21:24.654 Delete NVM Set: Not Supported
00:21:24.654 Extended LBA Formats Supported: Not Supported
00:21:24.654 Flexible Data Placement Supported: Not Supported
00:21:24.654
00:21:24.654 Controller Memory Buffer Support
00:21:24.654 ================================
00:21:24.654 Supported: No
00:21:24.654
00:21:24.654 Persistent Memory Region Support
00:21:24.654 ================================
00:21:24.654 Supported: No
00:21:24.654
00:21:24.654 Admin Command Set Attributes
00:21:24.654 ============================
00:21:24.654 Security Send/Receive: Not Supported
00:21:24.654 Format NVM: Not Supported
00:21:24.654 Firmware Activate/Download: Not Supported
00:21:24.654 Namespace Management: Not Supported
00:21:24.654 Device Self-Test: Not Supported
00:21:24.654 Directives: Not Supported
00:21:24.654 NVMe-MI: Not Supported
00:21:24.654 Virtualization Management: Not Supported
00:21:24.654 Doorbell Buffer Config: Not Supported
00:21:24.654 Get LBA Status Capability: Not Supported
00:21:24.654 Command & Feature Lockdown Capability: Not Supported
00:21:24.654 Abort Command Limit: 1
00:21:24.654 Async Event Request Limit: 4
00:21:24.654 Number of Firmware Slots: N/A
00:21:24.654 Firmware Slot 1 Read-Only: N/A
00:21:24.654 Firmware Activation Without Reset: N/A
00:21:24.654 Multiple Update Detection Support: N/A
00:21:24.654 Firmware Update Granularity: No Information Provided
00:21:24.654 Per-Namespace SMART Log: No
00:21:24.654 Asymmetric Namespace Access Log Page: Not Supported
00:21:24.654 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:24.654 Command Effects Log Page: Not Supported
00:21:24.654 Get Log Page Extended Data: Supported
00:21:24.654 Telemetry Log Pages: Not Supported
00:21:24.654 Persistent Event Log Pages: Not Supported
00:21:24.654 Supported Log Pages Log Page: May Support
00:21:24.654 Commands Supported & Effects Log Page: Not Supported
00:21:24.654 Feature Identifiers & Effects Log Page: May Support
00:21:24.654 NVMe-MI Commands & Effects Log Page: May Support
00:21:24.654 Data Area 4 for Telemetry Log: Not Supported
00:21:24.654 Error Log Page Entries Supported: 128
00:21:24.654 Keep Alive: Not Supported
00:21:24.654
00:21:24.654 NVM Command Set Attributes
00:21:24.654 ==========================
00:21:24.654 Submission Queue Entry Size
00:21:24.654 Max: 1
00:21:24.654 Min: 1
00:21:24.654 Completion Queue Entry Size
00:21:24.654 Max: 1
00:21:24.654 Min: 1
00:21:24.654 Number of Namespaces: 0
00:21:24.655 Compare Command: Not Supported
00:21:24.655 Write Uncorrectable Command: Not Supported
00:21:24.655 Dataset Management Command: Not Supported
00:21:24.655 Write Zeroes Command: Not Supported
00:21:24.655 Set Features Save Field: Not Supported
00:21:24.655 Reservations: Not Supported
00:21:24.655 Timestamp: Not Supported
00:21:24.655 Copy: Not Supported
00:21:24.655 Volatile Write Cache: Not Present
00:21:24.655 Atomic Write Unit (Normal): 1
00:21:24.655 Atomic Write Unit (PFail): 1
00:21:24.655 Atomic Compare & Write Unit: 1
00:21:24.655 Fused Compare & Write: Supported
00:21:24.655 Scatter-Gather List
00:21:24.655 SGL Command Set: Supported
00:21:24.655 SGL Keyed: Supported
00:21:24.655 SGL Bit Bucket Descriptor: Not Supported
00:21:24.655 SGL Metadata Pointer: Not Supported
00:21:24.655 Oversized SGL: Not Supported
00:21:24.655 SGL Metadata Address: Not Supported
00:21:24.655 SGL Offset: Supported
00:21:24.655 Transport SGL Data Block: Not Supported
00:21:24.655 Replay Protected Memory Block: Not Supported
00:21:24.655
00:21:24.655 Firmware Slot Information
00:21:24.655 =========================
00:21:24.655 Active slot: 0
00:21:24.655
00:21:24.655
00:21:24.655 Error Log
00:21:24.655 =========
00:21:24.655
00:21:24.655 Active Namespaces
00:21:24.655 =================
00:21:24.655 Discovery Log Page
00:21:24.655 ==================
00:21:24.655 Generation Counter: 2
00:21:24.655 Number of Records: 2
00:21:24.655 Record Format: 0
00:21:24.655
00:21:24.655 Discovery Log Entry 0
00:21:24.655 ----------------------
00:21:24.655 Transport Type: 3 (TCP)
00:21:24.655 Address Family: 1 (IPv4)
00:21:24.655 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:24.655 Entry Flags:
00:21:24.655 Duplicate Returned Information: 1
00:21:24.655 Explicit Persistent Connection Support for Discovery: 1
00:21:24.655 Transport Requirements:
00:21:24.655 Secure Channel: Not Required
00:21:24.655 Port ID: 0 (0x0000)
00:21:24.655 Controller ID: 65535 (0xffff)
00:21:24.655 Admin Max SQ Size: 128
00:21:24.655 Transport Service Identifier: 4420
00:21:24.655 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:24.655 Transport Address: 10.0.0.2
Discovery Log Entry 1 00:21:24.655 ---------------------- 00:21:24.655 Transport Type: 3 (TCP) 00:21:24.655 Address Family: 1 (IPv4) 00:21:24.655 Subsystem Type: 2 (NVM Subsystem) 00:21:24.655 Entry Flags: 00:21:24.655 Duplicate Returned Information: 0 00:21:24.655 Explicit Persistent Connection Support for Discovery: 0 00:21:24.655 Transport Requirements: 00:21:24.655 Secure Channel: Not Required 00:21:24.655 Port ID: 0 (0x0000) 00:21:24.655 Controller ID: 65535 (0xffff) 00:21:24.655 Admin Max SQ Size: 128 00:21:24.655 Transport Service Identifier: 4420 00:21:24.655 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:24.655 Transport Address: 10.0.0.2 [2024-12-11 14:58:07.418834] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:24.655 [2024-12-11 14:58:07.418859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b100) on tqpair=0x1f19690 00:21:24.655 [2024-12-11 14:58:07.418873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.655 [2024-12-11 14:58:07.418882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b280) on tqpair=0x1f19690 00:21:24.655 [2024-12-11 14:58:07.418890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.655 [2024-12-11 14:58:07.418898] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b400) on tqpair=0x1f19690 00:21:24.655 [2024-12-11 14:58:07.418905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.655 [2024-12-11 14:58:07.418913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b580) on tqpair=0x1f19690 00:21:24.655 [2024-12-11 14:58:07.418921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.655 [2024-12-11 14:58:07.418948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.418956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.418962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19690) 00:21:24.655 [2024-12-11 14:58:07.418976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.655 [2024-12-11 14:58:07.419001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b580, cid 3, qid 0 00:21:24.655 [2024-12-11 14:58:07.419128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.655 [2024-12-11 14:58:07.419141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.655 [2024-12-11 14:58:07.419148] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.419155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b580) on tqpair=0x1f19690 00:21:24.655 [2024-12-11 14:58:07.419167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.419175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.419182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19690) 00:21:24.655 [2024-12-11 
14:58:07.419193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.655 [2024-12-11 14:58:07.419228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b580, cid 3, qid 0 00:21:24.655 [2024-12-11 14:58:07.419331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.655 [2024-12-11 14:58:07.419343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.655 [2024-12-11 14:58:07.419350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.419357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b580) on tqpair=0x1f19690 00:21:24.655 [2024-12-11 14:58:07.419365] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:24.655 [2024-12-11 14:58:07.419373] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:24.655 [2024-12-11 14:58:07.419388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.419397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.655 [2024-12-11 14:58:07.419404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19690) 00:21:24.655 [2024-12-11 14:58:07.419414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.655 [2024-12-11 14:58:07.419434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b580, cid 3, qid 0 00:21:24.655 [2024-12-11 14:58:07.419523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.917 [2024-12-11 14:58:07.419538] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.917 [2024-12-11 14:58:07.423578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.423598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b580) on tqpair=0x1f19690 00:21:24.917 [2024-12-11 14:58:07.423618] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.423627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.423634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19690) 00:21:24.917 [2024-12-11 14:58:07.423644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.917 [2024-12-11 14:58:07.423667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f7b580, cid 3, qid 0 00:21:24.917 [2024-12-11 14:58:07.423788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.917 [2024-12-11 14:58:07.423800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.917 [2024-12-11 14:58:07.423807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.423814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f7b580) on tqpair=0x1f19690 00:21:24.917 [2024-12-11 14:58:07.423827] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:21:24.917 00:21:24.917 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:24.917 [2024-12-11 14:58:07.463646] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:21:24.917 [2024-12-11 14:58:07.463701] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid731366 ] 00:21:24.917 [2024-12-11 14:58:07.516210] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:24.917 [2024-12-11 14:58:07.516261] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:24.917 [2024-12-11 14:58:07.516271] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:24.917 [2024-12-11 14:58:07.516285] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:24.917 [2024-12-11 14:58:07.516297] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:24.917 [2024-12-11 14:58:07.519855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:24.917 [2024-12-11 14:58:07.519896] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x865690 0 00:21:24.917 [2024-12-11 14:58:07.527562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:24.917 [2024-12-11 14:58:07.527582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:24.917 [2024-12-11 14:58:07.527590] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:24.917 [2024-12-11 14:58:07.527596] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:24.917 [2024-12-11 14:58:07.527640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.527651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.527658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.917 [2024-12-11 14:58:07.527672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:24.917 [2024-12-11 14:58:07.527699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.917 [2024-12-11 14:58:07.535561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.917 [2024-12-11 14:58:07.535580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.917 [2024-12-11 14:58:07.535588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.535595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.917 [2024-12-11 14:58:07.535608] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:24.917 [2024-12-11 14:58:07.535619] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:24.917 [2024-12-11 14:58:07.535629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to read vs wait for vs (no timeout) 00:21:24.917 [2024-12-11 14:58:07.535646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.535655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.535661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.917 [2024-12-11 14:58:07.535674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.917 [2024-12-11 14:58:07.535703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.917 [2024-12-11 14:58:07.535801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.917 [2024-12-11 14:58:07.535815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.917 [2024-12-11 14:58:07.535823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.535829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.917 [2024-12-11 14:58:07.535837] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:24.917 [2024-12-11 14:58:07.535851] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:24.917 [2024-12-11 14:58:07.535863] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.535871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.535878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.917 [2024-12-11 14:58:07.535888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.917 [2024-12-11 14:58:07.535910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.917 [2024-12-11 14:58:07.535993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.917 [2024-12-11 14:58:07.536007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.917 [2024-12-11 14:58:07.536014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.536020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.917 [2024-12-11 14:58:07.536029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:24.917 [2024-12-11 14:58:07.536042] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:24.917 [2024-12-11 14:58:07.536055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.536063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.536069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.917 [2024-12-11 14:58:07.536079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.917 [2024-12-11 14:58:07.536101] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.917 [2024-12-11 14:58:07.536186] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.917 [2024-12-11 14:58:07.536200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.917 [2024-12-11 14:58:07.536207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.536214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.917 [2024-12-11 14:58:07.536222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:24.917 [2024-12-11 14:58:07.536239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.536248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.917 [2024-12-11 14:58:07.536255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.917 [2024-12-11 14:58:07.536265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.917 [2024-12-11 14:58:07.536286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.917 [2024-12-11 14:58:07.536364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.918 [2024-12-11 14:58:07.536380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.918 [2024-12-11 14:58:07.536388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.918 [2024-12-11 14:58:07.536403] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:24.918 [2024-12-11 14:58:07.536411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:24.918 [2024-12-11 14:58:07.536424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:24.918 [2024-12-11 14:58:07.536534] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:24.918 [2024-12-11 14:58:07.536542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:24.918 [2024-12-11 14:58:07.536565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536580] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.536590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.918 [2024-12-11 14:58:07.536612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.918 [2024-12-11 14:58:07.536742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.918 [2024-12-11 14:58:07.536756] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.918 [2024-12-11 14:58:07.536763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.918 [2024-12-11 14:58:07.536778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:24.918 [2024-12-11 14:58:07.536794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536810] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.536821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.918 [2024-12-11 14:58:07.536842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.918 [2024-12-11 14:58:07.536917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.918 [2024-12-11 14:58:07.536930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.918 [2024-12-11 14:58:07.536937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.536943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.918 [2024-12-11 14:58:07.536951] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:24.918 [2024-12-11 14:58:07.536959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.536972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:24.918 [2024-12-11 14:58:07.536987] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537001] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.918 [2024-12-11 14:58:07.537046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.918 [2024-12-11 14:58:07.537167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.918 [2024-12-11 14:58:07.537181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.918 [2024-12-11 14:58:07.537188] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537194] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=4096, cccid=0 00:21:24.918 [2024-12-11 14:58:07.537202] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7100) on tqpair(0x865690): 
expected_datao=0, payload_size=4096 00:21:24.918 [2024-12-11 14:58:07.537210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537227] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537236] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.918 [2024-12-11 14:58:07.537257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.918 [2024-12-11 14:58:07.537264] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.918 [2024-12-11 14:58:07.537282] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:24.918 [2024-12-11 14:58:07.537290] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:24.918 [2024-12-11 14:58:07.537298] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:24.918 [2024-12-11 14:58:07.537304] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:24.918 [2024-12-11 14:58:07.537312] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:24.918 [2024-12-11 14:58:07.537320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.918 [2024-12-11 14:58:07.537402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.918 [2024-12-11 14:58:07.537484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.918 [2024-12-11 14:58:07.537496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.918 [2024-12-11 14:58:07.537503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.918 [2024-12-11 14:58:07.537520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:24.918 [2024-12-11 14:58:07.537569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.918 [2024-12-11 14:58:07.537602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.918 [2024-12-11 14:58:07.537634] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.918 [2024-12-11 14:58:07.537664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.918 [2024-12-11 14:58:07.537717] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.918 [2024-12-11 14:58:07.537740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7100, cid 0, qid 0 00:21:24.918 [2024-12-11 14:58:07.537752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7280, cid 1, qid 0 00:21:24.918 [2024-12-11 14:58:07.537760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7400, cid 2, qid 0 00:21:24.918 [2024-12-11 14:58:07.537767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.918 [2024-12-11 14:58:07.537775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.918 [2024-12-11 14:58:07.537917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.918 [2024-12-11 14:58:07.537929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.918 [2024-12-11 14:58:07.537936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.537943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.918 
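The four ASYNC EVENT REQUEST (0c) submissions on cid 0-3 above arm AERs right after the async-event-configuration feature is set. An application would consume the resulting completions by registering a callback and polling the admin queue, roughly as sketched here (handle_aer and the bare polling loop are illustrative, not the harness's code):

    #include "spdk/nvme.h"

    static void handle_aer(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        if (!spdk_nvme_cpl_is_error(cpl)) {
            /* cpl->cdw0 carries the async event type and info per the NVMe spec */
        }
    }

    static void poll_admin(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, handle_aer, NULL);
        for (;;) {
            /* Reaps AER completions; also services the keep-alive timer
             * negotiated on cid 4 above */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }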
[2024-12-11 14:58:07.537951] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:24.918 [2024-12-11 14:58:07.537959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.537991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:24.918 [2024-12-11 14:58:07.538003] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.538011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.918 [2024-12-11 14:58:07.538020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.538031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:24.919 [2024-12-11 14:58:07.538053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.919 [2024-12-11 14:58:07.538181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.538193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.538200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.538207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.538274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.538293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.538308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.538316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.538327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.919 [2024-12-11 14:58:07.538348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.919 [2024-12-11 14:58:07.538486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.919 [2024-12-11 14:58:07.538500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.919 [2024-12-11 14:58:07.538508] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.538514] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=4096, cccid=4 00:21:24.919 [2024-12-11 14:58:07.538522] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7700) on tqpair(0x865690): expected_datao=0, payload_size=4096 00:21:24.919 [2024-12-11 14:58:07.538529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:24.919 [2024-12-11 14:58:07.538539] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.538556] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.582562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.582580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.582587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.582594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.582618] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:24.919 [2024-12-11 14:58:07.582636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.582669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.582684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.582692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.582704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.919 [2024-12-11 14:58:07.582727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.919 [2024-12-11 14:58:07.582882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.919 [2024-12-11 14:58:07.582900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.919 [2024-12-11 14:58:07.582909] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.582915] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=4096, cccid=4 00:21:24.919 [2024-12-11 14:58:07.582922] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7700) on tqpair(0x865690): expected_datao=0, payload_size=4096 00:21:24.919 [2024-12-11 14:58:07.582930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.582948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.582957] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.623651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.623671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.623679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.623686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.623710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.623731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.623746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.623754] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.623766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.919 [2024-12-11 14:58:07.623789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.919 [2024-12-11 14:58:07.623895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.919 [2024-12-11 14:58:07.623910] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.919 [2024-12-11 14:58:07.623917] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.623923] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=4096, cccid=4 00:21:24.919 [2024-12-11 14:58:07.623931] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7700) on tqpair(0x865690): expected_datao=0, payload_size=4096 00:21:24.919 [2024-12-11 14:58:07.623938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.623955] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.623965] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.664685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.664704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.664713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.664720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.664734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664811] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:24.919 [2024-12-11 14:58:07.664819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:24.919 [2024-12-11 14:58:07.664828] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:24.919 [2024-12-11 14:58:07.664848] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.664857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.664868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.919 [2024-12-11 14:58:07.664880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.664887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.664893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.664903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.919 [2024-12-11 14:58:07.664930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.919 [2024-12-11 14:58:07.664942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7880, cid 5, qid 0 00:21:24.919 [2024-12-11 14:58:07.665032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.665046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.665054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.665060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.665071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.665080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.665087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.665094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7880) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.665109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.665118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x865690) 00:21:24.919 [2024-12-11 14:58:07.665129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.919 [2024-12-11 14:58:07.665150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7880, cid 5, qid 0 00:21:24.919 [2024-12-11 14:58:07.665230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.665244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.665251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.665257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7880) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.665273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.665282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x865690) 00:21:24.919 [2024-12-11 
14:58:07.665292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.919 [2024-12-11 14:58:07.665313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7880, cid 5, qid 0 00:21:24.919 [2024-12-11 14:58:07.665393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.919 [2024-12-11 14:58:07.665407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.919 [2024-12-11 14:58:07.665415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.919 [2024-12-11 14:58:07.665422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7880) on tqpair=0x865690 00:21:24.919 [2024-12-11 14:58:07.665437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.665446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x865690) 00:21:24.920 [2024-12-11 14:58:07.665456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.920 [2024-12-11 14:58:07.665477] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7880, cid 5, qid 0 00:21:24.920 [2024-12-11 14:58:07.669563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.920 [2024-12-11 14:58:07.669580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.920 [2024-12-11 14:58:07.669587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.669594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7880) on tqpair=0x865690 00:21:24.920 [2024-12-11 14:58:07.669620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.669631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x865690) 00:21:24.920 [2024-12-11 14:58:07.669642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.920 [2024-12-11 14:58:07.669655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.669663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x865690) 00:21:24.920 [2024-12-11 14:58:07.669672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.920 [2024-12-11 14:58:07.669684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.669692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x865690) 00:21:24.920 [2024-12-11 14:58:07.669701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.920 [2024-12-11 14:58:07.669714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.669721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x865690) 00:21:24.920 [2024-12-11 14:58:07.669730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff 
cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.920 [2024-12-11 14:58:07.669753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7880, cid 5, qid 0 00:21:24.920 [2024-12-11 14:58:07.669765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7700, cid 4, qid 0 00:21:24.920 [2024-12-11 14:58:07.669773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7a00, cid 6, qid 0 00:21:24.920 [2024-12-11 14:58:07.669781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7b80, cid 7, qid 0 00:21:24.920 [2024-12-11 14:58:07.669953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.920 [2024-12-11 14:58:07.669968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.920 [2024-12-11 14:58:07.669975] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.669982] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=8192, cccid=5 00:21:24.920 [2024-12-11 14:58:07.669989] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7880) on tqpair(0x865690): expected_datao=0, payload_size=8192 00:21:24.920 [2024-12-11 14:58:07.669996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670019] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670029] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.920 [2024-12-11 14:58:07.670052] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.920 [2024-12-11 14:58:07.670059] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670065] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=512, cccid=4 00:21:24.920 [2024-12-11 14:58:07.670073] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7700) on tqpair(0x865690): expected_datao=0, payload_size=512 00:21:24.920 [2024-12-11 14:58:07.670080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670089] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670096] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.920 [2024-12-11 14:58:07.670114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.920 [2024-12-11 14:58:07.670120] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670126] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=512, cccid=6 00:21:24.920 [2024-12-11 14:58:07.670134] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7a00) on tqpair(0x865690): expected_datao=0, payload_size=512 00:21:24.920 [2024-12-11 14:58:07.670141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670150] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670157] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670165] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:24.920 [2024-12-11 14:58:07.670174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:24.920 [2024-12-11 14:58:07.670181] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x865690): datao=0, datal=4096, cccid=7 00:21:24.920 [2024-12-11 14:58:07.670194] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8c7b80) on tqpair(0x865690): expected_datao=0, payload_size=4096 00:21:24.920 [2024-12-11 14:58:07.670201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670210] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670217] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.920 [2024-12-11 14:58:07.670235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.920 [2024-12-11 14:58:07.670241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670248] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7880) on tqpair=0x865690 00:21:24.920 [2024-12-11 14:58:07.670269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.920 [2024-12-11 14:58:07.670281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.920 [2024-12-11 14:58:07.670288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7700) on tqpair=0x865690 00:21:24.920 [2024-12-11 14:58:07.670325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.920 [2024-12-11 14:58:07.670336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.920 [2024-12-11 14:58:07.670343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7a00) on tqpair=0x865690 00:21:24.920 [2024-12-11 14:58:07.670359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.920 [2024-12-11 14:58:07.670371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.920 [2024-12-11 14:58:07.670378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.920 [2024-12-11 14:58:07.670385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7b80) on tqpair=0x865690 00:21:24.920 ===================================================== 00:21:24.920 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.920 ===================================================== 00:21:24.920 Controller Capabilities/Features 00:21:24.920 ================================ 00:21:24.920 Vendor ID: 8086 00:21:24.920 Subsystem Vendor ID: 8086 00:21:24.920 Serial Number: SPDK00000000000001 00:21:24.920 Model Number: SPDK bdev Controller 00:21:24.920 Firmware Version: 25.01 00:21:24.920 Recommended Arb Burst: 6 00:21:24.920 IEEE OUI Identifier: e4 d2 5c 00:21:24.920 Multi-path I/O 00:21:24.920 May have multiple subsystem ports: Yes 00:21:24.920 May have multiple controllers: Yes 00:21:24.920 Associated with SR-IOV VF: No 00:21:24.920 Max Data Transfer 
Size: 131072 00:21:24.920 Max Number of Namespaces: 32 00:21:24.920 Max Number of I/O Queues: 127 00:21:24.920 NVMe Specification Version (VS): 1.3 00:21:24.920 NVMe Specification Version (Identify): 1.3 00:21:24.920 Maximum Queue Entries: 128 00:21:24.920 Contiguous Queues Required: Yes 00:21:24.920 Arbitration Mechanisms Supported 00:21:24.920 Weighted Round Robin: Not Supported 00:21:24.920 Vendor Specific: Not Supported 00:21:24.920 Reset Timeout: 15000 ms 00:21:24.920 Doorbell Stride: 4 bytes 00:21:24.920 NVM Subsystem Reset: Not Supported 00:21:24.920 Command Sets Supported 00:21:24.920 NVM Command Set: Supported 00:21:24.920 Boot Partition: Not Supported 00:21:24.920 Memory Page Size Minimum: 4096 bytes 00:21:24.920 Memory Page Size Maximum: 4096 bytes 00:21:24.920 Persistent Memory Region: Not Supported 00:21:24.920 Optional Asynchronous Events Supported 00:21:24.920 Namespace Attribute Notices: Supported 00:21:24.920 Firmware Activation Notices: Not Supported 00:21:24.920 ANA Change Notices: Not Supported 00:21:24.920 PLE Aggregate Log Change Notices: Not Supported 00:21:24.920 LBA Status Info Alert Notices: Not Supported 00:21:24.920 EGE Aggregate Log Change Notices: Not Supported 00:21:24.920 Normal NVM Subsystem Shutdown event: Not Supported 00:21:24.920 Zone Descriptor Change Notices: Not Supported 00:21:24.920 Discovery Log Change Notices: Not Supported 00:21:24.920 Controller Attributes 00:21:24.920 128-bit Host Identifier: Supported 00:21:24.920 Non-Operational Permissive Mode: Not Supported 00:21:24.920 NVM Sets: Not Supported 00:21:24.920 Read Recovery Levels: Not Supported 00:21:24.920 Endurance Groups: Not Supported 00:21:24.920 Predictable Latency Mode: Not Supported 00:21:24.920 Traffic Based Keep ALive: Not Supported 00:21:24.920 Namespace Granularity: Not Supported 00:21:24.920 SQ Associations: Not Supported 00:21:24.920 UUID List: Not Supported 00:21:24.920 Multi-Domain Subsystem: Not Supported 00:21:24.920 Fixed Capacity Management: Not Supported 00:21:24.920 Variable Capacity Management: Not Supported 00:21:24.920 Delete Endurance Group: Not Supported 00:21:24.920 Delete NVM Set: Not Supported 00:21:24.920 Extended LBA Formats Supported: Not Supported 00:21:24.920 Flexible Data Placement Supported: Not Supported 00:21:24.920 00:21:24.920 Controller Memory Buffer Support 00:21:24.920 ================================ 00:21:24.920 Supported: No 00:21:24.920 00:21:24.920 Persistent Memory Region Support 00:21:24.920 ================================ 00:21:24.921 Supported: No 00:21:24.921 00:21:24.921 Admin Command Set Attributes 00:21:24.921 ============================ 00:21:24.921 Security Send/Receive: Not Supported 00:21:24.921 Format NVM: Not Supported 00:21:24.921 Firmware Activate/Download: Not Supported 00:21:24.921 Namespace Management: Not Supported 00:21:24.921 Device Self-Test: Not Supported 00:21:24.921 Directives: Not Supported 00:21:24.921 NVMe-MI: Not Supported 00:21:24.921 Virtualization Management: Not Supported 00:21:24.921 Doorbell Buffer Config: Not Supported 00:21:24.921 Get LBA Status Capability: Not Supported 00:21:24.921 Command & Feature Lockdown Capability: Not Supported 00:21:24.921 Abort Command Limit: 4 00:21:24.921 Async Event Request Limit: 4 00:21:24.921 Number of Firmware Slots: N/A 00:21:24.921 Firmware Slot 1 Read-Only: N/A 00:21:24.921 Firmware Activation Without Reset: N/A 00:21:24.921 Multiple Update Detection Support: N/A 00:21:24.921 Firmware Update Granularity: No Information Provided 00:21:24.921 Per-Namespace SMART Log: No 
00:21:24.921 Asymmetric Namespace Access Log Page: Not Supported 00:21:24.921 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:24.921 Command Effects Log Page: Supported 00:21:24.921 Get Log Page Extended Data: Supported 00:21:24.921 Telemetry Log Pages: Not Supported 00:21:24.921 Persistent Event Log Pages: Not Supported 00:21:24.921 Supported Log Pages Log Page: May Support 00:21:24.921 Commands Supported & Effects Log Page: Not Supported 00:21:24.921 Feature Identifiers & Effects Log Page:May Support 00:21:24.921 NVMe-MI Commands & Effects Log Page: May Support 00:21:24.921 Data Area 4 for Telemetry Log: Not Supported 00:21:24.921 Error Log Page Entries Supported: 128 00:21:24.921 Keep Alive: Supported 00:21:24.921 Keep Alive Granularity: 10000 ms 00:21:24.921 00:21:24.921 NVM Command Set Attributes 00:21:24.921 ========================== 00:21:24.921 Submission Queue Entry Size 00:21:24.921 Max: 64 00:21:24.921 Min: 64 00:21:24.921 Completion Queue Entry Size 00:21:24.921 Max: 16 00:21:24.921 Min: 16 00:21:24.921 Number of Namespaces: 32 00:21:24.921 Compare Command: Supported 00:21:24.921 Write Uncorrectable Command: Not Supported 00:21:24.921 Dataset Management Command: Supported 00:21:24.921 Write Zeroes Command: Supported 00:21:24.921 Set Features Save Field: Not Supported 00:21:24.921 Reservations: Supported 00:21:24.921 Timestamp: Not Supported 00:21:24.921 Copy: Supported 00:21:24.921 Volatile Write Cache: Present 00:21:24.921 Atomic Write Unit (Normal): 1 00:21:24.921 Atomic Write Unit (PFail): 1 00:21:24.921 Atomic Compare & Write Unit: 1 00:21:24.921 Fused Compare & Write: Supported 00:21:24.921 Scatter-Gather List 00:21:24.921 SGL Command Set: Supported 00:21:24.921 SGL Keyed: Supported 00:21:24.921 SGL Bit Bucket Descriptor: Not Supported 00:21:24.921 SGL Metadata Pointer: Not Supported 00:21:24.921 Oversized SGL: Not Supported 00:21:24.921 SGL Metadata Address: Not Supported 00:21:24.921 SGL Offset: Supported 00:21:24.921 Transport SGL Data Block: Not Supported 00:21:24.921 Replay Protected Memory Block: Not Supported 00:21:24.921 00:21:24.921 Firmware Slot Information 00:21:24.921 ========================= 00:21:24.921 Active slot: 1 00:21:24.921 Slot 1 Firmware Revision: 25.01 00:21:24.921 00:21:24.921 00:21:24.921 Commands Supported and Effects 00:21:24.921 ============================== 00:21:24.921 Admin Commands 00:21:24.921 -------------- 00:21:24.921 Get Log Page (02h): Supported 00:21:24.921 Identify (06h): Supported 00:21:24.921 Abort (08h): Supported 00:21:24.921 Set Features (09h): Supported 00:21:24.921 Get Features (0Ah): Supported 00:21:24.921 Asynchronous Event Request (0Ch): Supported 00:21:24.921 Keep Alive (18h): Supported 00:21:24.921 I/O Commands 00:21:24.921 ------------ 00:21:24.921 Flush (00h): Supported LBA-Change 00:21:24.921 Write (01h): Supported LBA-Change 00:21:24.921 Read (02h): Supported 00:21:24.921 Compare (05h): Supported 00:21:24.921 Write Zeroes (08h): Supported LBA-Change 00:21:24.921 Dataset Management (09h): Supported LBA-Change 00:21:24.921 Copy (19h): Supported LBA-Change 00:21:24.921 00:21:24.921 Error Log 00:21:24.921 ========= 00:21:24.921 00:21:24.921 Arbitration 00:21:24.921 =========== 00:21:24.921 Arbitration Burst: 1 00:21:24.921 00:21:24.921 Power Management 00:21:24.921 ================ 00:21:24.921 Number of Power States: 1 00:21:24.921 Current Power State: Power State #0 00:21:24.921 Power State #0: 00:21:24.921 Max Power: 0.00 W 00:21:24.921 Non-Operational State: Operational 00:21:24.921 Entry Latency: Not Reported 
00:21:24.921 Exit Latency: Not Reported 00:21:24.921 Relative Read Throughput: 0 00:21:24.921 Relative Read Latency: 0 00:21:24.921 Relative Write Throughput: 0 00:21:24.921 Relative Write Latency: 0 00:21:24.921 Idle Power: Not Reported 00:21:24.921 Active Power: Not Reported 00:21:24.921 Non-Operational Permissive Mode: Not Supported 00:21:24.921 00:21:24.921 Health Information 00:21:24.921 ================== 00:21:24.921 Critical Warnings: 00:21:24.921 Available Spare Space: OK 00:21:24.921 Temperature: OK 00:21:24.921 Device Reliability: OK 00:21:24.921 Read Only: No 00:21:24.921 Volatile Memory Backup: OK 00:21:24.921 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:24.921 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:24.921 Available Spare: 0% 00:21:24.921 Available Spare Threshold: 0% 00:21:24.921 Life Percentage Used:[2024-12-11 14:58:07.670513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.921 [2024-12-11 14:58:07.670526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x865690) 00:21:24.921 [2024-12-11 14:58:07.670537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.921 [2024-12-11 14:58:07.670570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7b80, cid 7, qid 0 00:21:24.921 [2024-12-11 14:58:07.670677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.921 [2024-12-11 14:58:07.670690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.921 [2024-12-11 14:58:07.670697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.921 [2024-12-11 14:58:07.670704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7b80) on tqpair=0x865690 00:21:24.921 [2024-12-11 14:58:07.670751] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:24.921 [2024-12-11 14:58:07.670771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7100) on tqpair=0x865690 00:21:24.921 [2024-12-11 14:58:07.670781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.921 [2024-12-11 14:58:07.670791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7280) on tqpair=0x865690 00:21:24.921 [2024-12-11 14:58:07.670798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.921 [2024-12-11 14:58:07.670807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7400) on tqpair=0x865690 00:21:24.921 [2024-12-11 14:58:07.670814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.921 [2024-12-11 14:58:07.670822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.921 [2024-12-11 14:58:07.670830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.921 [2024-12-11 14:58:07.670842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.921 [2024-12-11 14:58:07.670850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.921 [2024-12-11 14:58:07.670856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.921 [2024-12-11 14:58:07.670867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.921 [2024-12-11 14:58:07.670889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.921 [2024-12-11 14:58:07.670995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.921 [2024-12-11 14:58:07.671007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.921 [2024-12-11 14:58:07.671014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.671032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.671056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.671082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.671174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.671189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.671196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.671210] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:24.922 [2024-12-11 14:58:07.671218] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:24.922 [2024-12-11 14:58:07.671234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671243] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.671260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.671281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.671357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.671369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.671377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.671399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 
[2024-12-11 14:58:07.671415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.671425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.671446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.671522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.671534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.671541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.671574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.671601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.671622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.671694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.671706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.671713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.671736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.671762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.671787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.671868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.671882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.671890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.671912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.671928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.671939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.671959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.672030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.672044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.672051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672058] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.672073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672083] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.672099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.672120] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.672196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.672208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.672215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.672237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.672263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.672284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.672359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.672371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.672378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.672401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672410] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.672427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.672447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 
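The run of FABRIC PROPERTY GET records through this stretch is the shutdown handshake: nvme_ctrlr_destruct_async aborts the outstanding admin commands (the ABORTED - SQ DELETION completions above), sets CC.SHN through a Fabrics Property Set, then polls the controller status register until the target reports shutdown done (logged below as "shutdown complete in 6 milliseconds"). From a kernel-initiator host the equivalent teardown is one command; a sketch, assuming a connection like the one in the earlier note:

    # detach the controller; the driver performs the same CC/CSTS shutdown poll
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # or drop every NVMe-oF controller on the host
    nvme disconnect-all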
00:21:24.922 [2024-12-11 14:58:07.672522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.672536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.672551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.672576] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.672602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.672623] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.672698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.672711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.672719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.672742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.672768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.672788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.672858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.672870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.922 [2024-12-11 14:58:07.672877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.922 [2024-12-11 14:58:07.672900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.672916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.922 [2024-12-11 14:58:07.672926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.922 [2024-12-11 14:58:07.672946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.922 [2024-12-11 14:58:07.673023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.922 [2024-12-11 14:58:07.673035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:24.922 [2024-12-11 14:58:07.673042] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.922 [2024-12-11 14:58:07.673049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.923 [2024-12-11 14:58:07.673064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.923 [2024-12-11 14:58:07.673090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.923 [2024-12-11 14:58:07.673111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.923 [2024-12-11 14:58:07.673181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.923 [2024-12-11 14:58:07.673201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.923 [2024-12-11 14:58:07.673209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.923 [2024-12-11 14:58:07.673232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.923 [2024-12-11 14:58:07.673258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.923 [2024-12-11 14:58:07.673279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.923 [2024-12-11 14:58:07.673356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.923 [2024-12-11 14:58:07.673369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.923 [2024-12-11 14:58:07.673376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.923 [2024-12-11 14:58:07.673399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.673415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.923 [2024-12-11 14:58:07.673425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.923 [2024-12-11 14:58:07.673445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.923 [2024-12-11 14:58:07.673520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.923 [2024-12-11 14:58:07.673532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.923 [2024-12-11 14:58:07.673539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.677558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.923 [2024-12-11 14:58:07.677582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.677592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.677599] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x865690) 00:21:24.923 [2024-12-11 14:58:07.677609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.923 [2024-12-11 14:58:07.677632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8c7580, cid 3, qid 0 00:21:24.923 [2024-12-11 14:58:07.677719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:24.923 [2024-12-11 14:58:07.677733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:24.923 [2024-12-11 14:58:07.677740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:24.923 [2024-12-11 14:58:07.677747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8c7580) on tqpair=0x865690 00:21:24.923 [2024-12-11 14:58:07.677760] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:25.181 0% 00:21:25.181 Data Units Read: 0 00:21:25.181 Data Units Written: 0 00:21:25.181 Host Read Commands: 0 00:21:25.181 Host Write Commands: 0 00:21:25.181 Controller Busy Time: 0 minutes 00:21:25.181 Power Cycles: 0 00:21:25.181 Power On Hours: 0 hours 00:21:25.181 Unsafe Shutdowns: 0 00:21:25.181 Unrecoverable Media Errors: 0 00:21:25.181 Lifetime Error Log Entries: 0 00:21:25.181 Warning Temperature Time: 0 minutes 00:21:25.181 Critical Temperature Time: 0 minutes 00:21:25.181 00:21:25.181 Number of Queues 00:21:25.181 ================ 00:21:25.181 Number of I/O Submission Queues: 127 00:21:25.181 Number of I/O Completion Queues: 127 00:21:25.181 00:21:25.181 Active Namespaces 00:21:25.181 ================= 00:21:25.181 Namespace ID:1 00:21:25.181 Error Recovery Timeout: Unlimited 00:21:25.181 Command Set Identifier: NVM (00h) 00:21:25.181 Deallocate: Supported 00:21:25.181 Deallocated/Unwritten Error: Not Supported 00:21:25.181 Deallocated Read Value: Unknown 00:21:25.181 Deallocate in Write Zeroes: Not Supported 00:21:25.181 Deallocated Guard Field: 0xFFFF 00:21:25.181 Flush: Supported 00:21:25.181 Reservation: Supported 00:21:25.181 Namespace Sharing Capabilities: Multiple Controllers 00:21:25.181 Size (in LBAs): 131072 (0GiB) 00:21:25.181 Capacity (in LBAs): 131072 (0GiB) 00:21:25.181 Utilization (in LBAs): 131072 (0GiB) 00:21:25.181 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:25.181 EUI64: ABCDEF0123456789 00:21:25.181 UUID: 92a5a73d-98af-4d9a-ad06-a7b686f7c94b 00:21:25.181 Thin Provisioning: Not Supported 00:21:25.181 Per-NS Atomic Units: Yes 00:21:25.181 Atomic Boundary Size (Normal): 0 00:21:25.181 Atomic Boundary Size (PFail): 0 00:21:25.181 Atomic Boundary Offset: 0 00:21:25.181 Maximum Single Source Range Length: 65535 00:21:25.181 Maximum Copy Length: 65535 00:21:25.181 Maximum Source Range Count: 1 00:21:25.181 NGUID/EUI64 Never Reused: No 00:21:25.181 Namespace Write Protected: No 00:21:25.181 Number of LBA Formats: 1 00:21:25.181 Current LBA Format: LBA Format #00 00:21:25.181 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:25.181 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:25.181 14:58:07 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.181 rmmod nvme_tcp 00:21:25.181 rmmod nvme_fabrics 00:21:25.181 rmmod nvme_keyring 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 731289 ']' 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 731289 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 731289 ']' 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 731289 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 731289 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 731289' 00:21:25.181 killing process with pid 731289 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 731289 00:21:25.181 14:58:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 731289 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.439 14:58:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:27.349 00:21:27.349 real 0m5.693s 00:21:27.349 user 0m5.113s 00:21:27.349 sys 0m2.024s 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:27.349 ************************************ 00:21:27.349 END TEST nvmf_identify 00:21:27.349 ************************************ 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.349 ************************************ 00:21:27.349 START TEST nvmf_perf 00:21:27.349 ************************************ 00:21:27.349 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:27.608 * Looking for test storage... 
00:21:27.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:27.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.608 --rc genhtml_branch_coverage=1 00:21:27.608 --rc genhtml_function_coverage=1 00:21:27.608 --rc genhtml_legend=1 00:21:27.608 --rc geninfo_all_blocks=1 00:21:27.608 --rc geninfo_unexecuted_blocks=1 00:21:27.608 00:21:27.608 ' 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:27.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.608 --rc genhtml_branch_coverage=1 00:21:27.608 --rc genhtml_function_coverage=1 00:21:27.608 --rc genhtml_legend=1 00:21:27.608 --rc geninfo_all_blocks=1 00:21:27.608 --rc geninfo_unexecuted_blocks=1 00:21:27.608 00:21:27.608 ' 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:27.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.608 --rc genhtml_branch_coverage=1 00:21:27.608 --rc genhtml_function_coverage=1 00:21:27.608 --rc genhtml_legend=1 00:21:27.608 --rc geninfo_all_blocks=1 00:21:27.608 --rc geninfo_unexecuted_blocks=1 00:21:27.608 00:21:27.608 ' 00:21:27.608 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:27.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:27.608 --rc genhtml_branch_coverage=1 00:21:27.608 --rc genhtml_function_coverage=1 00:21:27.609 --rc genhtml_legend=1 00:21:27.609 --rc geninfo_all_blocks=1 00:21:27.609 --rc geninfo_unexecuted_blocks=1 00:21:27.609 00:21:27.609 ' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:27.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.609 14:58:10 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:27.609 14:58:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:30.140 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:30.140 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:30.140 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:30.141 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.141 14:58:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:30.141 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.141 14:58:12 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:30.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:21:30.141 00:21:30.141 --- 10.0.0.2 ping statistics --- 00:21:30.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.141 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:21:30.141 00:21:30.141 --- 10.0.0.1 ping statistics --- 00:21:30.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.141 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=733373 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 733373 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 733373 ']' 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:30.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.141 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:30.141 [2024-12-11 14:58:12.696086] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:21:30.141 [2024-12-11 14:58:12.696182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.141 [2024-12-11 14:58:12.772496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.141 [2024-12-11 14:58:12.834957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.141 [2024-12-11 14:58:12.835007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.141 [2024-12-11 14:58:12.835020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.141 [2024-12-11 14:58:12.835031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.141 [2024-12-11 14:58:12.835041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.141 [2024-12-11 14:58:12.836592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.141 [2024-12-11 14:58:12.836620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.141 [2024-12-11 14:58:12.836671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.141 [2024-12-11 14:58:12.836674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:30.399 14:58:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:33.677 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:33.677 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:33.677 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:33.677 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:33.935 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
00:21:33.935 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:33.935 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:33.935 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:33.935 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:34.193 [2024-12-11 14:58:16.951138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.450 14:58:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:34.707 14:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:34.707 14:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.965 14:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:34.965 14:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:35.222 14:58:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.479 [2024-12-11 14:58:18.039158] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.479 14:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:35.737 14:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:35.737 14:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:35.737 14:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:35.737 14:58:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:37.109 Initializing NVMe Controllers 00:21:37.109 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:37.109 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:37.109 Initialization complete. Launching workers. 
00:21:37.109 ======================================================== 00:21:37.109 Latency(us) 00:21:37.109 Device Information : IOPS MiB/s Average min max 00:21:37.109 PCIE (0000:88:00.0) NSID 1 from core 0: 84755.62 331.08 377.11 15.72 7274.53 00:21:37.109 ======================================================== 00:21:37.109 Total : 84755.62 331.08 377.11 15.72 7274.53 00:21:37.109 00:21:37.109 14:58:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:38.482 Initializing NVMe Controllers 00:21:38.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:38.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:38.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:38.482 Initialization complete. Launching workers. 00:21:38.482 ======================================================== 00:21:38.482 Latency(us) 00:21:38.482 Device Information : IOPS MiB/s Average min max 00:21:38.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 81.00 0.32 12710.15 141.02 45848.69 00:21:38.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.00 0.23 17593.84 6990.34 48873.01 00:21:38.482 ======================================================== 00:21:38.482 Total : 139.00 0.54 14747.94 141.02 48873.01 00:21:38.482 00:21:38.482 14:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:39.855 Initializing NVMe Controllers 00:21:39.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:39.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:39.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:39.855 Initialization complete. Launching workers. 00:21:39.855 ======================================================== 00:21:39.855 Latency(us) 00:21:39.855 Device Information : IOPS MiB/s Average min max 00:21:39.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7746.66 30.26 4135.32 762.26 11203.37 00:21:39.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3904.32 15.25 8252.91 6865.66 15819.48 00:21:39.855 ======================================================== 00:21:39.855 Total : 11650.98 45.51 5515.15 762.26 15819.48 00:21:39.855 00:21:39.855 14:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:39.855 14:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:39.855 14:58:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.384 Initializing NVMe Controllers 00:21:42.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.384 Controller IO queue size 128, less than required. 00:21:42.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:42.384 Controller IO queue size 128, less than required. 00:21:42.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:42.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:42.384 Initialization complete. Launching workers. 00:21:42.384 ======================================================== 00:21:42.384 Latency(us) 00:21:42.384 Device Information : IOPS MiB/s Average min max 00:21:42.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1542.36 385.59 84865.71 65172.70 136610.71 00:21:42.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 577.45 144.36 232331.70 70441.86 346678.58 00:21:42.384 ======================================================== 00:21:42.384 Total : 2119.81 529.95 125036.40 65172.70 346678.58 00:21:42.384 00:21:42.384 14:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:42.641 No valid NVMe controllers or AIO or URING devices found 00:21:42.641 Initializing NVMe Controllers 00:21:42.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:42.641 Controller IO queue size 128, less than required. 00:21:42.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.641 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:42.641 Controller IO queue size 128, less than required. 00:21:42.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:42.641 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:42.641 WARNING: Some requested NVMe devices were skipped 00:21:42.641 14:58:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:45.170 Initializing NVMe Controllers 00:21:45.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.170 Controller IO queue size 128, less than required. 00:21:45.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.170 Controller IO queue size 128, less than required. 00:21:45.170 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.170 Initialization complete. Launching workers. 
00:21:45.170 00:21:45.170 ==================== 00:21:45.170 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:45.170 TCP transport: 00:21:45.170 polls: 9896 00:21:45.170 idle_polls: 6734 00:21:45.170 sock_completions: 3162 00:21:45.170 nvme_completions: 5969 00:21:45.170 submitted_requests: 9028 00:21:45.170 queued_requests: 1 00:21:45.170 00:21:45.170 ==================== 00:21:45.170 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:45.170 TCP transport: 00:21:45.170 polls: 10517 00:21:45.170 idle_polls: 6614 00:21:45.170 sock_completions: 3903 00:21:45.170 nvme_completions: 6361 00:21:45.170 submitted_requests: 9496 00:21:45.170 queued_requests: 1 00:21:45.170 ======================================================== 00:21:45.170 Latency(us) 00:21:45.170 Device Information : IOPS MiB/s Average min max 00:21:45.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1491.96 372.99 87410.95 55107.13 144168.01 00:21:45.170 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1589.96 397.49 82045.79 40689.37 117306.38 00:21:45.170 ======================================================== 00:21:45.170 Total : 3081.92 770.48 84643.07 40689.37 144168.01 00:21:45.170 00:21:45.170 14:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:45.170 14:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.428 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:45.428 rmmod nvme_tcp 00:21:45.686 rmmod nvme_fabrics 00:21:45.686 rmmod nvme_keyring 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 733373 ']' 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 733373 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 733373 ']' 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 733373 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 733373 00:21:45.686 14:58:28 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 733373' 00:21:45.686 killing process with pid 733373 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 733373 00:21:45.686 14:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 733373 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.585 14:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:49.493 00:21:49.493 real 0m21.791s 00:21:49.493 user 1m6.842s 00:21:49.493 sys 0m5.734s 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:49.493 ************************************ 00:21:49.493 END TEST nvmf_perf 00:21:49.493 ************************************ 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.493 ************************************ 00:21:49.493 START TEST nvmf_fio_host 00:21:49.493 ************************************ 00:21:49.493 14:58:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:49.493 * Looking for test storage... 
00:21:49.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.493 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:49.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.494 --rc genhtml_branch_coverage=1 00:21:49.494 --rc genhtml_function_coverage=1 00:21:49.494 --rc genhtml_legend=1 00:21:49.494 --rc geninfo_all_blocks=1 00:21:49.494 --rc geninfo_unexecuted_blocks=1 00:21:49.494 00:21:49.494 ' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:49.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.494 --rc genhtml_branch_coverage=1 00:21:49.494 --rc genhtml_function_coverage=1 00:21:49.494 --rc genhtml_legend=1 00:21:49.494 --rc geninfo_all_blocks=1 00:21:49.494 --rc geninfo_unexecuted_blocks=1 00:21:49.494 00:21:49.494 ' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:49.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.494 --rc genhtml_branch_coverage=1 00:21:49.494 --rc genhtml_function_coverage=1 00:21:49.494 --rc genhtml_legend=1 00:21:49.494 --rc geninfo_all_blocks=1 00:21:49.494 --rc geninfo_unexecuted_blocks=1 00:21:49.494 00:21:49.494 ' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:49.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.494 --rc genhtml_branch_coverage=1 00:21:49.494 --rc genhtml_function_coverage=1 00:21:49.494 --rc genhtml_legend=1 00:21:49.494 --rc geninfo_all_blocks=1 00:21:49.494 --rc geninfo_unexecuted_blocks=1 00:21:49.494 00:21:49.494 ' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.494 14:58:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.494 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:49.495 
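A note on the "[: : integer expression expected" complaint just above: it appears whenever common.sh runs a numeric -eq test against a variable that is empty in this configuration, and the harness tolerates the non-zero exit, so the run proceeds normally. A minimal reproduction plus the usual guard, with FLAG as an illustrative name rather than the variable common.sh actually tests:

    # a numeric test against an empty value reproduces the message
    FLAG=''
    [ "$FLAG" -eq 1 ] && echo enabled    # -> [: : integer expression expected
    # conventional guard: default the value before comparing
    [ "${FLAG:-0}" -eq 1 ] && echo enabled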
14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.495 14:58:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:51.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:51.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:51.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:51.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.398 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
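The device discovery traced here boils down to two steps: match each PCI function's vendor:device pair against the supported-NIC table (0x8086 - 0x159b is the Intel E810, bound to the ice driver), then read the kernel's PCI-to-netdev mapping out of sysfs. A condensed sketch of the sysfs step, using the same expressions that appear in the trace:

    # e810 holds the matching PCI addresses, e.g. 0000:0a:00.0 and 0000:0a:00.1
    for pci in "${e810[@]}"; do
        # every entry under .../net/ is a netdev owned by that PCI function
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path: cvl_0_0, cvl_0_1
        net_devs+=("${pci_net_devs[@]}")
    done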
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:51.399 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:51.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:21:51.657 00:21:51.657 --- 10.0.0.2 ping statistics --- 00:21:51.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.657 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:21:51.657 00:21:51.657 --- 10.0.0.1 ping statistics --- 00:21:51.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.657 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=737366 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 737366 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 737366 ']' 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.657 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.657 [2024-12-11 14:58:34.295874] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
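Condensed, the bring-up traced above pins one port of the NIC inside a private network namespace so the two ports of a single adapter can exchange real TCP traffic on one host: cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), while cvl_0_0 moves into the namespace the target will run in (10.0.0.2). The sequence, as it appears in the trace:

    ip netns add cvl_0_0_ns_spdk                      # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag lets cleanup strip exactly this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The two pings are the sanity gate: only after both directions answer does the harness launch nvmf_tgt inside the namespace.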
00:21:51.657 [2024-12-11 14:58:34.295973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.657 [2024-12-11 14:58:34.370956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.915 [2024-12-11 14:58:34.432633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.915 [2024-12-11 14:58:34.432681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.915 [2024-12-11 14:58:34.432709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.915 [2024-12-11 14:58:34.432722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.915 [2024-12-11 14:58:34.432739] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.915 [2024-12-11 14:58:34.434355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.915 [2024-12-11 14:58:34.434418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.915 [2024-12-11 14:58:34.434486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.915 [2024-12-11 14:58:34.434489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.915 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.915 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:51.915 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:52.173 [2024-12-11 14:58:34.824527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.173 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:52.173 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:52.173 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:52.173 14:58:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:52.738 Malloc1 00:21:52.738 14:58:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.738 14:58:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:52.996 14:58:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.254 [2024-12-11 14:58:36.023775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.512 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
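From this point the target is driven entirely over its JSON-RPC socket. Pulled out of the trace, the configuration amounts to a handful of rpc.py calls (rpc.py standing in for the full scripts/rpc.py path used above): create the TCP transport, back a subsystem with a malloc bdev, and expose it on 10.0.0.2:4420 alongside a discovery listener:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420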
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:53.770 14:58:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:54.028 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:54.028 fio-3.35 00:21:54.028 Starting 1 thread 00:21:56.556 00:21:56.556 test: (groupid=0, jobs=1): 
err= 0: pid=737729: Wed Dec 11 14:58:38 2024 00:21:56.556 read: IOPS=7871, BW=30.7MiB/s (32.2MB/s)(61.7MiB/2007msec) 00:21:56.556 slat (nsec): min=1934, max=104863, avg=2488.27, stdev=1508.60 00:21:56.556 clat (usec): min=3006, max=15684, avg=8860.40, stdev=746.77 00:21:56.556 lat (usec): min=3029, max=15687, avg=8862.89, stdev=746.69 00:21:56.556 clat percentiles (usec): 00:21:56.556 | 1.00th=[ 7242], 5.00th=[ 7767], 10.00th=[ 7963], 20.00th=[ 8291], 00:21:56.557 | 30.00th=[ 8455], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:56.557 | 70.00th=[ 9241], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:21:56.557 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13960], 99.95th=[14222], 00:21:56.557 | 99.99th=[15533] 00:21:56.557 bw ( KiB/s): min=30392, max=31952, per=99.90%, avg=31454.00, stdev=715.76, samples=4 00:21:56.557 iops : min= 7598, max= 7988, avg=7863.50, stdev=178.94, samples=4 00:21:56.557 write: IOPS=7846, BW=30.6MiB/s (32.1MB/s)(61.5MiB/2007msec); 0 zone resets 00:21:56.557 slat (nsec): min=2038, max=90567, avg=2569.18, stdev=1207.58 00:21:56.557 clat (usec): min=1252, max=13756, avg=7375.40, stdev=635.07 00:21:56.557 lat (usec): min=1258, max=13759, avg=7377.97, stdev=635.05 00:21:56.557 clat percentiles (usec): 00:21:56.557 | 1.00th=[ 5997], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6915], 00:21:56.557 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:21:56.557 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8356], 00:21:56.557 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11863], 99.95th=[12780], 00:21:56.557 | 99.99th=[13698] 00:21:56.557 bw ( KiB/s): min=31304, max=31488, per=99.99%, avg=31382.00, stdev=77.94, samples=4 00:21:56.557 iops : min= 7826, max= 7872, avg=7845.50, stdev=19.49, samples=4 00:21:56.557 lat (msec) : 2=0.01%, 4=0.13%, 10=97.19%, 20=2.67% 00:21:56.557 cpu : usr=64.06%, sys=34.45%, ctx=97, majf=0, minf=36 00:21:56.557 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:56.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:56.557 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:56.557 issued rwts: total=15798,15747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:56.557 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:56.557 00:21:56.557 Run status group 0 (all jobs): 00:21:56.557 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.7MiB (64.7MB), run=2007-2007msec 00:21:56.557 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.5MiB (64.5MB), run=2007-2007msec 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 
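The second fio job is wired up the same way as the first: fio never opens a kernel block device here; the SPDK nvme ioengine is injected with LD_PRELOAD and the whole fabric connection is encoded in the --filename string that the plugin parses. Reduced to its essentials (paths as in this run):

    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio \
        --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'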
-- # local sanitizers 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:56.557 14:58:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:56.557 14:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:56.557 14:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:56.557 14:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:56.557 14:58:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:56.557 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:56.557 fio-3.35 00:21:56.557 Starting 1 thread 00:21:59.087 00:21:59.087 test: (groupid=0, jobs=1): err= 0: pid=738182: Wed Dec 11 14:58:41 2024 00:21:59.087 read: IOPS=8159, BW=127MiB/s (134MB/s)(256MiB/2010msec) 00:21:59.087 slat (nsec): min=2848, max=93905, avg=3730.56, stdev=1580.54 00:21:59.087 clat (usec): min=2130, max=16608, avg=8972.59, stdev=1995.34 00:21:59.087 lat (usec): min=2134, max=16612, avg=8976.32, stdev=1995.38 00:21:59.087 clat percentiles (usec): 00:21:59.087 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6456], 20.00th=[ 7308], 00:21:59.087 | 30.00th=[ 7963], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9372], 00:21:59.087 | 70.00th=[ 9896], 80.00th=[10552], 90.00th=[11600], 95.00th=[12387], 00:21:59.087 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15533], 99.95th=[15795], 00:21:59.087 | 99.99th=[16450] 00:21:59.087 bw ( KiB/s): min=57088, max=74624, per=51.05%, avg=66648.00, stdev=8454.03, samples=4 00:21:59.087 iops : min= 3568, max= 4664, avg=4165.50, stdev=528.38, samples=4 00:21:59.087 write: IOPS=4765, BW=74.5MiB/s (78.1MB/s)(136MiB/1828msec); 0 zone resets 
00:21:59.087 slat (usec): min=30, max=194, avg=33.68, stdev= 5.79 00:21:59.087 clat (usec): min=6788, max=18883, avg=11993.00, stdev=2137.97 00:21:59.087 lat (usec): min=6818, max=18926, avg=12026.68, stdev=2138.36 00:21:59.087 clat percentiles (usec): 00:21:59.087 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10028], 00:21:59.087 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12387], 00:21:59.087 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[15664], 00:21:59.087 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:21:59.087 | 99.99th=[19006] 00:21:59.087 bw ( KiB/s): min=61120, max=78400, per=90.81%, avg=69240.00, stdev=8699.86, samples=4 00:21:59.087 iops : min= 3820, max= 4900, avg=4327.50, stdev=543.74, samples=4 00:21:59.087 lat (msec) : 4=0.21%, 10=53.62%, 20=46.17% 00:21:59.087 cpu : usr=76.36%, sys=22.25%, ctx=39, majf=0, minf=64 00:21:59.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:59.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:59.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:59.087 issued rwts: total=16400,8711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:59.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:59.087 00:21:59.087 Run status group 0 (all jobs): 00:21:59.087 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2010-2010msec 00:21:59.087 WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=136MiB (143MB), run=1828-1828msec 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:59.087 rmmod nvme_tcp 00:21:59.087 rmmod nvme_fabrics 00:21:59.087 rmmod nvme_keyring 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 737366 ']' 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 737366 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 737366 ']' 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 737366 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:59.087 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.345 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 737366 00:21:59.345 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.345 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.345 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 737366' 00:21:59.345 killing process with pid 737366 00:21:59.346 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 737366 00:21:59.346 14:58:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 737366 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.605 14:58:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:01.519 00:22:01.519 real 0m12.218s 00:22:01.519 user 0m36.325s 00:22:01.519 sys 0m4.065s 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.519 ************************************ 00:22:01.519 END TEST nvmf_fio_host 00:22:01.519 ************************************ 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:01.519 ************************************ 00:22:01.519 START TEST nvmf_failover 00:22:01.519 ************************************ 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:01.519 * Looking for test storage... 00:22:01.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.519 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.858 --rc genhtml_branch_coverage=1 00:22:01.858 --rc genhtml_function_coverage=1 00:22:01.858 --rc genhtml_legend=1 00:22:01.858 --rc geninfo_all_blocks=1 00:22:01.858 --rc geninfo_unexecuted_blocks=1 00:22:01.858 00:22:01.858 ' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.858 --rc genhtml_branch_coverage=1 00:22:01.858 --rc genhtml_function_coverage=1 00:22:01.858 --rc genhtml_legend=1 00:22:01.858 --rc geninfo_all_blocks=1 00:22:01.858 --rc geninfo_unexecuted_blocks=1 00:22:01.858 00:22:01.858 ' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.858 --rc genhtml_branch_coverage=1 00:22:01.858 --rc genhtml_function_coverage=1 00:22:01.858 --rc genhtml_legend=1 00:22:01.858 --rc geninfo_all_blocks=1 00:22:01.858 --rc geninfo_unexecuted_blocks=1 00:22:01.858 00:22:01.858 ' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:01.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.858 --rc genhtml_branch_coverage=1 00:22:01.858 --rc genhtml_function_coverage=1 00:22:01.858 --rc genhtml_legend=1 00:22:01.858 --rc geninfo_all_blocks=1 00:22:01.858 --rc geninfo_unexecuted_blocks=1 00:22:01.858 00:22:01.858 ' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.858 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:01.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:01.859 14:58:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:03.819 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:03.819 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:03.819 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.819 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:03.820 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.820 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:22:04.080 00:22:04.080 --- 10.0.0.2 ping statistics --- 00:22:04.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.080 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:22:04.080 00:22:04.080 --- 10.0.0.1 ping statistics --- 00:22:04.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.080 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=740391 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 740391 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 740391 ']' 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.080 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:04.080 [2024-12-11 14:58:46.727431] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:22:04.080 [2024-12-11 14:58:46.727516] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.080 [2024-12-11 14:58:46.798742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:04.339 [2024-12-11 14:58:46.852901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
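The nvmf_tcp_init plumbing traced above splits one dual-port NIC into a target side and an initiator side on the same box: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1/24, TCP/4420 is opened in iptables, and a ping in each direction proves the path. Condensed (stale addresses flushed first in the real harness; names and addresses exactly as in this run; root on a disposable test box only):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Note that nvmf_tgt is then launched inside the namespace (nvmf/common.sh@508 above prefixes it with ip netns exec cvl_0_0_ns_spdk), while rpc.py and bdevperf below run from the root namespace.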
00:22:04.339 [2024-12-11 14:58:46.852953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.339 [2024-12-11 14:58:46.852981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.339 [2024-12-11 14:58:46.852992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.339 [2024-12-11 14:58:46.853002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.339 [2024-12-11 14:58:46.854574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.339 [2024-12-11 14:58:46.854632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.339 [2024-12-11 14:58:46.854629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.339 14:58:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:04.597 [2024-12-11 14:58:47.261801] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.597 14:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:04.854 Malloc0 00:22:04.854 14:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:05.420 14:58:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:05.677 14:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:05.935 [2024-12-11 14:58:48.479675] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.935 14:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.192 [2024-12-11 14:58:48.744384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:06.192 14:58:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:06.450 [2024-12-11 14:58:49.009279] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=740677 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 740677 /var/tmp/bdevperf.sock 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 740677 ']' 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:06.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.450 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:06.708 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.708 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:06.708 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:07.273 NVMe0n1 00:22:07.273 14:58:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:07.531 00:22:07.531 14:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=740820 00:22:07.531 14:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:07.531 14:58:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:08.906 14:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.906 14:58:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:12.188 14:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:12.188 00:22:12.188 14:58:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:12.446 [2024-12-11 14:58:55.130381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 [2024-12-11 14:58:55.130526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20058b0 is same with the state(6) to be set 00:22:12.446 14:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:15.728 14:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.728 [2024-12-11 14:58:58.410071] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.728 14:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:17.102 14:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:17.102 [2024-12-11 14:58:59.728406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 
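The tcp.c:1790 *ERROR* bursts above and below are the target's qpair state machine logging while listeners are yanked out from under live connections; in this test they are expected noise, not failures. Stripped of xtrace decoration, the failover choreography driven so far (subsystem nqn.2016-06.io.spdk:cnode1 was created earlier with listeners on 4420/4421/4422; paths, ports, and RPC calls exactly as traced, only the shell framing is mine):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # @43 kill primary path
    sleep 3                                                               # @45 let bdevperf fail over to 4421
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
         -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover           # @47 register third path
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # @48 kill second path
    sleep 3                                                               # @50
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # @53 restore primary
    sleep 1                                                               # @55
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # @57 kill third path

I/O keeps running throughout; bdevperf, attached with -x failover on 4420 and 4421, is expected to hop to whichever listener survives each step.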
00:22:17.102 [2024-12-11 14:58:59.728607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.102 [2024-12-11 14:58:59.728723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.103 [2024-12-11 14:58:59.728735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.103 [2024-12-11 14:58:59.728746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.103 [2024-12-11 14:58:59.728758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.103 [2024-12-11 14:58:59.728770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ecaf60 is same with the state(6) to be set 00:22:17.103 14:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 740820 00:22:23.669 { 00:22:23.669 "results": [ 00:22:23.669 { 00:22:23.669 "job": "NVMe0n1", 00:22:23.669 "core_mask": "0x1", 00:22:23.669 "workload": "verify", 00:22:23.669 "status": "finished", 00:22:23.669 "verify_range": { 00:22:23.669 "start": 0, 00:22:23.669 "length": 16384 00:22:23.669 }, 00:22:23.669 "queue_depth": 128, 00:22:23.669 "io_size": 4096, 00:22:23.669 "runtime": 15.006529, 00:22:23.669 "iops": 8381.818340536976, 00:22:23.669 "mibps": 32.741477892722564, 00:22:23.669 "io_failed": 8381, 00:22:23.669 "io_timeout": 0, 00:22:23.669 "avg_latency_us": 14288.328606799745, 00:22:23.669 "min_latency_us": 533.997037037037, 00:22:23.669 "max_latency_us": 19806.435555555556 00:22:23.669 } 00:22:23.669 ], 00:22:23.669 "core_count": 1 00:22:23.669 } 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 740677 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 740677 ']' 00:22:23.669 14:59:05 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 740677 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 740677 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 740677' 00:22:23.669 killing process with pid 740677 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 740677 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 740677 00:22:23.669 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:23.669 [2024-12-11 14:58:49.085031] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:22:23.669 [2024-12-11 14:58:49.085143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid740677 ] 00:22:23.669 [2024-12-11 14:58:49.168970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.669 [2024-12-11 14:58:49.229004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.669 Running I/O for 15 seconds... 
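From the cat of try.txt above, the harness replays the bdevperf-side log. The wall of nvme_qpair.c notices that follows is the expected fallout of removing an active listener: each command still in flight on the dying qpair is completed back to the host with ABORTED - SQ DELETION, consistent with the io_failed count (8381) in the JSON summary above while the verify job still reports status "finished" on the surviving path. A quick way to condense such a flood when reading the log offline (a sketch; try.txt stands for whatever file you saved the log as):

    grep -c 'ABORTED - SQ DELETION' try.txt       # total aborted completions
    grep -o 'lba:[0-9]*' try.txt | sort -u | wc -l  # distinct LBAs affected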
00:22:23.669 8382.00 IOPS, 32.74 MiB/s [2024-12-11T13:59:06.442Z] [2024-12-11 14:58:51.488051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.669 [2024-12-11 14:58:51.488408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.669 [2024-12-11 14:58:51.488682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.669 [2024-12-11 14:58:51.488696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488739] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.488976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.488990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489045] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77968 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 
[2024-12-11 14:58:51.489653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.670 [2024-12-11 14:58:51.489794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.670 [2024-12-11 14:58:51.489827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.670 [2024-12-11 14:58:51.489855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.670 [2024-12-11 14:58:51.489883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.670 [2024-12-11 14:58:51.489898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.670 [2024-12-11 14:58:51.489911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.489926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.671 [2024-12-11 14:58:51.489940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.489955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.671 [2024-12-11 14:58:51.489972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.489988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.671 [2024-12-11 14:58:51.490001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.671 [2024-12-11 14:58:51.490215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.671 [2024-12-11 14:58:51.490229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.671 [2024-12-11 14:58:51.490243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:23.671 [2024-12-11 14:58:51.490257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats for WRITE lba:78168 through lba:78344 (len:8, SGL DATA BLOCK), each completed with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:22:23.671 [2024-12-11 14:58:51.490969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:23.671 [2024-12-11 14:58:51.490986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0
00:22:23.671 [2024-12-11 14:58:51.491005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... queued requests drained the same way, each preceded by nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o and completed manually with ABORTED - SQ DELETION (00/08): WRITE lba:78360 through lba:78416, then READ lba:77456 through lba:77640 (len:8, PRP1 0x0 PRP2 0x0) ...]
00:22:23.673 [2024-12-11 14:58:51.492613] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:23.673 [2024-12-11 14:58:51.492653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.673 [2024-12-11 14:58:51.492676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ASYNC EVENT REQUESTs cid:1 through cid:3 on the admin queue aborted with the same SQ DELETION (00/08) status ...]
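Every aborted command above carries the same status pair, "(00/08)": status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion), the expected outcome when a queue pair is deleted underneath in-flight I/O during failover. A minimal standalone sketch of how that "(SCT/SC)" pair and the trailing p/m/dnr flags unpack from the status dword of an NVMe completion entry (the struct and helper names here are illustrative, not SPDK APIs):

/* decode_cqe_status.c - illustrative decoder for the "(SCT/SC)" notation
 * used in the spdk_nvme_print_completion lines above. Completion queue
 * entry dword 3 holds the phase tag in bit 16 and the status field in
 * bits 31:17 (SC 24:17, SCT 27:25, CRD 29:28, M 30, DNR 31). */
#include <stdio.h>
#include <stdint.h>

struct cqe_status {
    unsigned sc;   /* status code: 0x08 = aborted, SQ deletion */
    unsigned sct;  /* status code type: 0x0 = generic command status */
    unsigned m;    /* more information available in a log page */
    unsigned dnr;  /* do not retry */
    unsigned p;    /* phase tag */
};

static struct cqe_status decode_cqe_dw3(uint32_t dw3)
{
    struct cqe_status s;
    s.p   = (dw3 >> 16) & 0x1;
    s.sc  = (dw3 >> 17) & 0xff;
    s.sct = (dw3 >> 25) & 0x7;
    s.m   = (dw3 >> 30) & 0x1;
    s.dnr = (dw3 >> 31) & 0x1;
    return s;
}

int main(void)
{
    /* Status as printed above: SCT 0x0, SC 0x08, p:0 m:0 dnr:0. */
    uint32_t dw3 = 0x08u << 17;
    struct cqe_status s = decode_cqe_dw3(dw3);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}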
00:22:23.673 [2024-12-11 14:58:51.492769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:23.673 [2024-12-11 14:58:51.496067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:23.673 [2024-12-11 14:58:51.496107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142d230 (9): Bad file descriptor
00:22:23.673 [2024-12-11 14:58:51.518450] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
8402.00 IOPS, 32.82 MiB/s [2024-12-11T13:59:06.446Z]
8454.67 IOPS, 33.03 MiB/s [2024-12-11T13:59:06.446Z]
8478.75 IOPS, 33.12 MiB/s [2024-12-11T13:59:06.446Z]
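The interim throughput samples above are consistent with the I/O size visible in the aborted commands: each request is len:8 blocks of 512 bytes (the SGL entries likewise show len:0x1000), i.e. 4 KiB, so 8402 completions per second amounts to the reported 32.82 MiB/s. A quick self-contained check (the values are taken from the log; everything else is local to this sketch):

/* throughput_check.c - sanity-check an IOPS sample against MiB/s. */
#include <stdio.h>

int main(void)
{
    const double iops = 8402.00;        /* first sample after the reset */
    const double io_bytes = 8 * 512.0;  /* len:8 blocks of 512 B = 4 KiB */
    /* 8402 * 4096 B/s = 34,414,592 B/s = 32.82 MiB/s */
    printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0));
    return 0;
}

The larger pattern around these samples is the failover choreography itself: in-flight and queued I/O on the old queue pair is drained with SQ DELETION status, the controller is marked failed and disconnected from 10.0.0.2:4420, and the bdev layer reconnects through the alternate listener at 10.0.0.2:4421 before throughput resumes; the same choreography repeats at the 14:58:55 timestamps below.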
00:22:23.673 [2024-12-11 14:58:55.131604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.673 [2024-12-11 14:58:55.131649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.673 [2024-12-11 14:58:55.131678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:23.673 [2024-12-11 14:58:55.131693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort pattern of the first pass repeats: WRITE lba:71664 through lba:72416 (len:8, SGL DATA BLOCK) interleaved with READ lba:71528 through lba:71592 (len:8, SGL TRANSPORT DATA BLOCK), every command completed with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:22:23.676 [2024-12-11 14:58:55.134789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:23.676 [2024-12-11 14:58:55.134806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72424 len:8 PRP1 0x0 PRP2 0x0
00:22:23.676 [2024-12-11 14:58:55.134819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... queued WRITE requests lba:72432 through lba:72464 manually completed with the same status, each preceded by nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o ...]
00:22:23.676 [2024-12-11 14:58:55.135093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 PRP1 0x0 PRP2 0x0
00:22:23.676 [2024-12-11 14:58:55.135104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72480 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72488 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72496 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72504 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72512 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72520 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 
14:58:55.135404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72528 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72536 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71600 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71608 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135631] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71616 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71624 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.676 [2024-12-11 14:58:55.135715] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.676 [2024-12-11 14:58:55.135726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.676 [2024-12-11 14:58:55.135737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71632 len:8 PRP1 0x0 PRP2 0x0 00:22:23.676 [2024-12-11 14:58:55.135749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.135762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.677 [2024-12-11 14:58:55.135773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.677 [2024-12-11 14:58:55.135784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71640 len:8 PRP1 0x0 PRP2 0x0 00:22:23.677 [2024-12-11 14:58:55.135796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.135809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.677 [2024-12-11 14:58:55.135820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.677 [2024-12-11 14:58:55.135831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71648 len:8 PRP1 0x0 PRP2 0x0 00:22:23.677 [2024-12-11 14:58:55.135844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.135927] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:23.677 [2024-12-11 14:58:55.135966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.677 [2024-12-11 14:58:55.135984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.136007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.677 [2024-12-11 14:58:55.136021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.136034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.677 [2024-12-11 14:58:55.136047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.136060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.677 [2024-12-11 14:58:55.136073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:55.136090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
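Reader's note on the repeated "(00/08)" above: spdk_nvme_print_completion prints the NVMe status as a hex "(SCT/SC)" pair. SCT 0x0 is the Generic Command Status type and SC 0x08 is "Command Aborted due to SQ Deletion", which is exactly what is expected when the target drops the queue pair during failover; dnr:0 means the host may retry. A minimal decode sketch (not part of the test suite; the full string table lives in SPDK's nvme_qpair.c):

import re

# Decode the "(SCT/SC)" pair from an spdk_nvme_print_completion line.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # queue pair torn down under the command
}

def decode_status(line):
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    if sct == 0x00:  # Generic Command Status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} sc 0x{sc:02x}"

print(decode_status("ABORTED - SQ DELETION (00/08) qid:1 cid:0 dnr:0"))
# -> ABORTED - SQ DELETION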
00:22:23.677 [2024-12-11 14:58:55.136148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142d230 (9): Bad file descriptor
00:22:23.677 [2024-12-11 14:58:55.139414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:22:23.677 [2024-12-11 14:58:55.248655] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
8288.40 IOPS, 32.38 MiB/s [2024-12-11T13:59:06.450Z] 8300.50 IOPS, 32.42 MiB/s [2024-12-11T13:59:06.450Z] 8297.57 IOPS, 32.41 MiB/s [2024-12-11T13:59:06.450Z] 8315.00 IOPS, 32.48 MiB/s [2024-12-11T13:59:06.450Z] 8323.44 IOPS, 32.51 MiB/s [2024-12-11T13:59:06.450Z]
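The interleaved "IOPS, MiB/s" samples are the workload's periodic throughput readings, and they are internally consistent with the len:8 commands in the abort storm: assuming the target's 512 B logical blocks (so len:8 means 4 KiB per I/O), MiB/s = IOPS x 4096 / 2^20, which reproduces the logged pairs. A quick arithmetic check:

# Relate the logged "IOPS, MiB/s" samples to the 4 KiB I/O size implied
# by len:8 (8 sectors), assuming a 512 B logical block size.
IO_SIZE = 8 * 512  # bytes per I/O

for iops in (8288.40, 8300.50, 8297.57, 8315.00, 8323.44):
    mib_s = iops * IO_SIZE / (1024 ** 2)
    print(f"{iops:8.2f} IOPS -> {mib_s:5.2f} MiB/s")
# Prints 32.38, 32.42, 32.41, 32.48, 32.51 MiB/s, matching the log.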
14:58:59.729691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.729973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.729987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.730000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.730027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.730054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.730083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.677 [2024-12-11 14:58:59.730111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.677 [2024-12-11 14:58:59.730310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.677 [2024-12-11 14:58:59.730324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 
[2024-12-11 14:58:59.730913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.730979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.730993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.731006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.731033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.678 [2024-12-11 14:58:59.731060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.678 [2024-12-11 14:58:59.731362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.678 [2024-12-11 14:58:59.731376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:92 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:23.679 [2024-12-11 14:58:59.731946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.731974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.731989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 
[2024-12-11 14:58:59.732091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679 [2024-12-11 14:58:59.732358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.679 [2024-12-11 14:58:59.732374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.679
[log condensed: nvme_qpair.c then repeats the same NOTICE pair for every request still queued on qid:1 (243:nvme_io_qpair_print_command for READ lba:12488 through lba:12552, then WRITE lba:12944 through lba:13120, len:8 each, followed in each case by 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08)) while the submission queue is torn down for failover.]
00:22:23.680 [2024-12-11 14:58:59.733402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:23.680 [2024-12-11 14:58:59.733416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:23.680 [2024-12-11 14:58:59.733428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13128 len:8 PRP1 0x0 PRP2 0x0 00:22:23.680 [2024-12-11 14:58:59.733440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.680 [2024-12-11 14:58:59.733504] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:23.680 [2024-12-11 14:58:59.733524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:23.680 [2024-12-11 14:58:59.736833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:23.680 [2024-12-11 14:58:59.736874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142d230 (9): Bad file descriptor 00:22:23.680 [2024-12-11 14:58:59.803896] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
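The condensed abort storm above is the expected signature of a bdev_nvme path failover: every command still queued on the dying submission queue is completed manually with ABORTED - SQ DELETION, then the controller reconnects on the next registered trid and I/O resumes. A minimal sketch of the RPC sequence that exercises this path, assembled from the rpc.py calls that appear verbatim later in this trace (same NQN, address, and ports; not a literal excerpt of failover.sh):

    # give the target two extra listeners so the initiator has alternate paths
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # register the controller on all three paths with the failover multipath policy
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # detaching the active path forces the abort/reset sequence logged above
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1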
00:22:23.680 8276.90 IOPS, 32.33 MiB/s [2024-12-11T13:59:06.453Z] 8309.45 IOPS, 32.46 MiB/s [2024-12-11T13:59:06.453Z] 8326.50 IOPS, 32.53 MiB/s [2024-12-11T13:59:06.453Z] 8351.38 IOPS, 32.62 MiB/s [2024-12-11T13:59:06.453Z] 8375.00 IOPS, 32.71 MiB/s 00:22:23.680 Latency(us) 00:22:23.680 [2024-12-11T13:59:06.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.680 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:23.680 Verification LBA range: start 0x0 length 0x4000 00:22:23.680 NVMe0n1 : 15.01 8381.82 32.74 558.49 0.00 14288.33 534.00 19806.44 00:22:23.680 [2024-12-11T13:59:06.453Z] =================================================================================================================== 00:22:23.680 [2024-12-11T13:59:06.453Z] Total : 8381.82 32.74 558.49 0.00 14288.33 534.00 19806.44 00:22:23.680 Received shutdown signal, test time was about 15.000000 seconds 00:22:23.680 00:22:23.680 Latency(us) 00:22:23.680 [2024-12-11T13:59:06.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.680 [2024-12-11T13:59:06.453Z] =================================================================================================================== 00:22:23.680 [2024-12-11T13:59:06.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=742664 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 742664 /var/tmp/bdevperf.sock 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 742664 ']' 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
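The grep -c / count check traced above is the pass gate for the first failover pass: one 'Resetting controller successful' line is expected per forced path switch, three in total. A hedged sketch of that verification step ($testdir stands in for test/nvmf/host, where the try.txt capture file used by this test lives; the variable name is illustrative):

    # count one successful reset per detached path; fail the test otherwise
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi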
00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:23.680 14:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:23.680 [2024-12-11 14:59:06.186364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:23.681 14:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:23.938 [2024-12-11 14:59:06.503255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:23.938 14:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:24.196 NVMe0n1 00:22:24.196 14:59:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:24.761 00:22:24.761 14:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:25.019 00:22:25.019 14:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:25.019 14:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:25.276 14:59:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.534 14:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:28.811 14:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:28.811 14:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:28.811 14:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=743328 00:22:28.811 14:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.811 14:59:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 743328 00:22:30.185 { 00:22:30.185 "results": [ 00:22:30.185 { 00:22:30.185 "job": "NVMe0n1", 00:22:30.185 "core_mask": "0x1", 00:22:30.185 
"workload": "verify", 00:22:30.185 "status": "finished", 00:22:30.185 "verify_range": { 00:22:30.185 "start": 0, 00:22:30.185 "length": 16384 00:22:30.185 }, 00:22:30.185 "queue_depth": 128, 00:22:30.185 "io_size": 4096, 00:22:30.185 "runtime": 1.047313, 00:22:30.185 "iops": 8214.354257036817, 00:22:30.185 "mibps": 32.08732131655007, 00:22:30.185 "io_failed": 0, 00:22:30.185 "io_timeout": 0, 00:22:30.185 "avg_latency_us": 14941.06730382597, 00:22:30.185 "min_latency_us": 3276.8, 00:22:30.185 "max_latency_us": 42137.22074074074 00:22:30.185 } 00:22:30.185 ], 00:22:30.185 "core_count": 1 00:22:30.185 } 00:22:30.185 14:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:30.185 [2024-12-11 14:59:05.706561] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:22:30.185 [2024-12-11 14:59:05.706650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742664 ] 00:22:30.185 [2024-12-11 14:59:05.774365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.185 [2024-12-11 14:59:05.830317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.185 [2024-12-11 14:59:08.173755] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:30.185 [2024-12-11 14:59:08.173836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.185 [2024-12-11 14:59:08.173860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.185 [2024-12-11 14:59:08.173877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.185 [2024-12-11 14:59:08.173891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.186 [2024-12-11 14:59:08.173905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.186 [2024-12-11 14:59:08.173920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.186 [2024-12-11 14:59:08.173946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.186 [2024-12-11 14:59:08.173965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.186 [2024-12-11 14:59:08.173987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:22:30.186 [2024-12-11 14:59:08.174036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:30.186 [2024-12-11 14:59:08.174068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd3230 (9): Bad file descriptor 00:22:30.186 [2024-12-11 14:59:08.178524] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:30.186 Running I/O for 1 seconds... 00:22:30.186 8475.00 IOPS, 33.11 MiB/s 00:22:30.186 Latency(us) 00:22:30.186 [2024-12-11T13:59:12.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.186 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:30.186 Verification LBA range: start 0x0 length 0x4000 00:22:30.186 NVMe0n1 : 1.05 8214.35 32.09 0.00 0.00 14941.07 3276.80 42137.22 00:22:30.186 [2024-12-11T13:59:12.959Z] =================================================================================================================== 00:22:30.186 [2024-12-11T13:59:12.959Z] Total : 8214.35 32.09 0.00 0.00 14941.07 3276.80 42137.22 00:22:30.186 14:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.186 14:59:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:30.444 14:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:30.701 14:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:30.701 14:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:30.959 14:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.216 14:59:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:34.553 14:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:34.553 14:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 742664 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 742664 ']' 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 742664 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 742664 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 742664' 00:22:34.553 killing process with pid 742664 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 742664 00:22:34.553 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 742664 00:22:34.811 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:34.811 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.068 rmmod nvme_tcp 00:22:35.068 rmmod nvme_fabrics 00:22:35.068 rmmod nvme_keyring 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 740391 ']' 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 740391 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 740391 ']' 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 740391 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 740391 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 740391' 00:22:35.068 killing process with pid 740391 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 740391 00:22:35.068 14:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 740391 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
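killprocess, traced twice above (bdevperf pid 742664, then the nvmf target pid 740391), follows one fixed pattern: bail out on an empty pid, probe liveness, refuse to signal a bare sudo wrapper, then kill and reap. A hedged reconstruction of that flow from the trace (not a verbatim copy of autotest_common.sh):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                   # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # reactor_0/reactor_1 for SPDK apps
            [ "$name" = sudo ] && return 1           # never signal a bare sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }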
00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.327 14:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.860 00:22:37.860 real 0m35.895s 00:22:37.860 user 2m6.562s 00:22:37.860 sys 0m6.074s 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:37.860 ************************************ 00:22:37.860 END TEST nvmf_failover 00:22:37.860 ************************************ 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:37.860 ************************************ 00:22:37.860 START TEST nvmf_host_discovery 00:22:37.860 ************************************ 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:37.860 * Looking for test storage... 
00:22:37.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.860 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export LCOV_OPTS, @1724 LCOV_OPTS=, @1725 export LCOV, @1725 LCOV=lcov [log condensed: each of these four assignments prints the same option block into the trace: --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:37.861 14:59:20
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=[log condensed: the toolchain prefixes /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times, then /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=[same condensed PATH, re-prepended starting with /opt/go/1.21.1/bin] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=[same condensed PATH, re-prepended starting with /opt/protoc/21.7/bin] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo [the same condensed PATH] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.861 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.861 14:59:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.767 14:59:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.767 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.767 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.767 
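NIC discovery above boils down to a sysfs walk: for each Intel E810 PCI function that matched the 0x8086:0x159b filter, the kernel net devices are read from /sys/bus/pci/devices/$pci/net/, which is how the two cvl_0_* interfaces were found. A standalone sketch of that lookup (device addresses copied from the 'Found ...' lines above):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            # each entry under net/ is a netdev bound to this PCI function
            [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
        done
    done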
14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.767 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.768 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.768 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.768 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.768 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:22:40.026 00:22:40.026 --- 10.0.0.2 ping statistics --- 00:22:40.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.026 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:22:40.026 00:22:40.026 --- 10.0.0.1 ping statistics --- 00:22:40.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.026 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=746067 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 746067 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 746067 ']' 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.026 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.026 [2024-12-11 14:59:22.716798] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
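The plumbing traced above gives the test a self-contained two-endpoint topology on one host: the target port cvl_0_0 moves into its own network namespace as 10.0.0.2/24, the initiator keeps cvl_0_1 as 10.0.0.1/24, an iptables ACCEPT rule opens the NVMe/TCP port, and both directions are ping-verified before the target app starts. Condensed from the exact commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator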
00:22:40.026 [2024-12-11 14:59:22.716892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.026 [2024-12-11 14:59:22.789193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.285 [2024-12-11 14:59:22.845067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.285 [2024-12-11 14:59:22.845136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.285 [2024-12-11 14:59:22.845149] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.285 [2024-12-11 14:59:22.845175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.285 [2024-12-11 14:59:22.845185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.285 [2024-12-11 14:59:22.845801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.285 [2024-12-11 14:59:22.995179] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.285 14:59:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.285 [2024-12-11 14:59:23.003399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.285 null0 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.285 null1 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=746094 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 746094 /tmp/host.sock 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 746094 ']' 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:40.285 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.285 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.543 [2024-12-11 14:59:23.080792] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
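At this point two SPDK apps are running side by side: the target on the default /var/tmp/spdk.sock inside the namespace, and a second nvmf_tgt acting as the host, started with -r /tmp/host.sock on core 0 (-m 0x1). Every rpc_cmd in the trace is routed by socket; a sketch of the two call paths, using the exact RPCs from the log (the wrapper names are illustrative, not from the test):

    rpc_tgt()  { scripts/rpc.py "$@"; }                    # target app, default socket
    rpc_host() { scripts/rpc.py -s /tmp/host.sock "$@"; }  # host app

    rpc_tgt bdev_null_create null0 1000 512   # 1000 MB null bdev, 512 B blocks
    rpc_tgt bdev_null_create null1 1000 512
    rpc_host bdev_nvme_get_controllers        # empty until discovery attaches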
00:22:40.543 [2024-12-11 14:59:23.080888] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid746094 ] 00:22:40.543 [2024-12-11 14:59:23.147258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.543 [2024-12-11 14:59:23.205833] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.801 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.802 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.060 [2024-12-11 14:59:23.601026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:41.060 14:59:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:41.060 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:41.061 14:59:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:41.626 [2024-12-11 14:59:24.342362] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:41.626 [2024-12-11 14:59:24.342395] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:41.626 [2024-12-11 14:59:24.342419] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:41.884 
[2024-12-11 14:59:24.429709] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:41.884 [2024-12-11 14:59:24.611880] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:41.884 [2024-12-11 14:59:24.612953] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f8ab60:1 started. 00:22:41.884 [2024-12-11 14:59:24.614729] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:41.884 [2024-12-11 14:59:24.614753] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:41.884 [2024-12-11 14:59:24.621127] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f8ab60 was disconnected and freed. delete nvme_qpair. 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.143 14:59:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.143 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.402 [2024-12-11 14:59:24.945240] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f8b190:1 started. 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.402 [2024-12-11 14:59:24.993197] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f8b190 was disconnected and freed. delete nvme_qpair. 
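The checks above all run through the same polling helper, whose internals are visible in the xtrace: evaluate a condition up to ten times, one second apart. Event counting asks the host app for notifications newer than the last seen id and advances the cursor by however many came back, which is why notify_id reaches 2 after the second namespace attach. A simplified reconstruction of the two helpers, assuming the host RPC socket shown earlier:

    waitforcondition() {                 # poll "$1" until true, at most 10 tries
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1                         # callers treat this as a test failure
    }

    notify_id=0
    get_notification_count() {           # count events newer than notify_id
        notification_count=$(scripts/rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }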
00:22:42.402 14:59:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.402 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:42.402 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.403 [2024-12-11 14:59:25.033448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:42.403 [2024-12-11 14:59:25.034706] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:42.403 [2024-12-11 14:59:25.034739] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # 
local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.403 [2024-12-11 14:59:25.161214] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # 
[[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:42.403 14:59:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:42.660 [2024-12-11 14:59:25.267265] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:42.660 [2024-12-11 14:59:25.267325] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:42.660 [2024-12-11 14:59:25.267341] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:42.660 [2024-12-11 14:59:25.267349] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:43.595 14:59:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.595 [2024-12-11 14:59:26.249971] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:43.595 [2024-12-11 14:59:26.250001] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:43.595 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.596 [2024-12-11 14:59:26.256359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.596 [2024-12-11 14:59:26.256392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.596 [2024-12-11 14:59:26.256424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
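Removing the 4420 listener is the interesting transition here: the target raises an AER on the discovery controller, and the host's in-flight admin commands on the dying queue complete as ABORTED - SQ DELETION, which is what the nvme_qpair notices around this point record. The test then waits for the discovery poller to prune the stale path; a sketch of that step, reusing the exact jq filter the trace's get_subsystem_paths applies:

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # Wait until the host reports 4421 as the only remaining path for nvme0.
    until [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == 4421 ]]; do
        sleep 1
    done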
00:22:43.596 [2024-12-11 14:59:26.256440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.596 [2024-12-11 14:59:26.256454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.596 [2024-12-11 14:59:26.256468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.596 [2024-12-11 14:59:26.256482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.596 [2024-12-11 14:59:26.256495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.596 [2024-12-11 14:59:26.256508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.596 [2024-12-11 14:59:26.266349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.596 [2024-12-11 14:59:26.276387] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:43.596 [2024-12-11 14:59:26.276409] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:43.596 [2024-12-11 14:59:26.276423] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.276432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.596 [2024-12-11 14:59:26.276481] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.276658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.596 [2024-12-11 14:59:26.276687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b0d0 with addr=10.0.0.2, port=4420 00:22:43.596 [2024-12-11 14:59:26.276704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.596 [2024-12-11 14:59:26.276728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.596 [2024-12-11 14:59:26.276749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.596 [2024-12-11 14:59:26.276762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.596 [2024-12-11 14:59:26.276777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:43.596 [2024-12-11 14:59:26.276789] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.596 [2024-12-11 14:59:26.276800] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:22:43.596 [2024-12-11 14:59:26.276808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:43.596 [2024-12-11 14:59:26.286514] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:43.596 [2024-12-11 14:59:26.286556] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:43.596 [2024-12-11 14:59:26.286567] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.286575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.596 [2024-12-11 14:59:26.286614] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.286761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.596 [2024-12-11 14:59:26.286789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b0d0 with addr=10.0.0.2, port=4420 00:22:43.596 [2024-12-11 14:59:26.286805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.596 [2024-12-11 14:59:26.286827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.596 [2024-12-11 14:59:26.286847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.596 [2024-12-11 14:59:26.286860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.596 [2024-12-11 14:59:26.286873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:43.596 [2024-12-11 14:59:26.286885] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.596 [2024-12-11 14:59:26.286894] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:43.596 [2024-12-11 14:59:26.286901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
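The repeating "connect() failed, errno = 111" blocks here are expected noise rather than a failure: errno 111 is ECONNREFUSED, and the host keeps retrying the removed 4420 listener until the next discovery log page prunes the path (seen further down as "10.0.0.2:4420 not found"). A quick manual probe, should one want to confirm the refusal from the same namespace (bash's /dev/tcp device; not part of the test itself):

    if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo 'port 4420 refuses connections (ECONNREFUSED / errno 111)'
    fi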
00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.596 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.596 [2024-12-11 14:59:26.296649] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:43.596 [2024-12-11 14:59:26.296674] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:43.596 [2024-12-11 14:59:26.296684] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.296693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.596 [2024-12-11 14:59:26.296719] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.296872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.596 [2024-12-11 14:59:26.296899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b0d0 with addr=10.0.0.2, port=4420 00:22:43.596 [2024-12-11 14:59:26.296926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.596 [2024-12-11 14:59:26.296948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.596 [2024-12-11 14:59:26.296969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.596 [2024-12-11 14:59:26.296984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.596 [2024-12-11 14:59:26.296998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:22:43.596 [2024-12-11 14:59:26.297010] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.596 [2024-12-11 14:59:26.297019] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:43.596 [2024-12-11 14:59:26.297027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:43.596 [2024-12-11 14:59:26.306754] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:43.596 [2024-12-11 14:59:26.306778] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:43.596 [2024-12-11 14:59:26.306788] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.306796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.596 [2024-12-11 14:59:26.306823] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.306978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.596 [2024-12-11 14:59:26.307006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b0d0 with addr=10.0.0.2, port=4420 00:22:43.596 [2024-12-11 14:59:26.307028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.596 [2024-12-11 14:59:26.307050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.596 [2024-12-11 14:59:26.307072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.596 [2024-12-11 14:59:26.307085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.596 [2024-12-11 14:59:26.307098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:43.596 [2024-12-11 14:59:26.307110] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.596 [2024-12-11 14:59:26.307119] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:43.596 [2024-12-11 14:59:26.307127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:43.596 [2024-12-11 14:59:26.316858] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:43.596 [2024-12-11 14:59:26.316893] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:43.596 [2024-12-11 14:59:26.316903] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:43.596 [2024-12-11 14:59:26.316910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.597 [2024-12-11 14:59:26.316934] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:43.597 [2024-12-11 14:59:26.317078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.597 [2024-12-11 14:59:26.317105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b0d0 with addr=10.0.0.2, port=4420 00:22:43.597 [2024-12-11 14:59:26.317121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.597 [2024-12-11 14:59:26.317142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.597 [2024-12-11 14:59:26.317163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.597 [2024-12-11 14:59:26.317177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.597 [2024-12-11 14:59:26.317189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:43.597 [2024-12-11 14:59:26.317201] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.597 [2024-12-11 14:59:26.317210] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:43.597 [2024-12-11 14:59:26.317217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.597 [2024-12-11 14:59:26.326968] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:43.597 [2024-12-11 14:59:26.326988] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:43.597 [2024-12-11 14:59:26.326997] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:43.597 [2024-12-11 14:59:26.327004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:43.597 [2024-12-11 14:59:26.327042] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:43.597 [2024-12-11 14:59:26.327182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.597 [2024-12-11 14:59:26.327209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5b0d0 with addr=10.0.0.2, port=4420 00:22:43.597 [2024-12-11 14:59:26.327224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5b0d0 is same with the state(6) to be set 00:22:43.597 [2024-12-11 14:59:26.327246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5b0d0 (9): Bad file descriptor 00:22:43.597 [2024-12-11 14:59:26.327266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:43.597 [2024-12-11 14:59:26.327279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:43.597 [2024-12-11 14:59:26.327292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
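Once the retry noise settles, the script's next wait (traced below at host/discovery.sh@63) requires get_subsystem_paths nvme0 to report only $NVMF_SECOND_PORT, i.e. 4421 once the 4420 path is gone. From the @63 lines the helper is essentially this pipeline (rpc_cmd again assumed to wrap scripts/rpc.py):

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # waited on below as:
    #   waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'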
00:22:43.597 [2024-12-11 14:59:26.327303] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:43.597 [2024-12-11 14:59:26.327312] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:43.597 [2024-12-11 14:59:26.327319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:43.597 [2024-12-11 14:59:26.336393] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:43.597 [2024-12-11 14:59:26.336422] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:43.597 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.855 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:43.856 14:59:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.856 14:59:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.227 [2024-12-11 14:59:27.591164] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.227 [2024-12-11 14:59:27.591189] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.227 [2024-12-11 14:59:27.591217] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.227 [2024-12-11 14:59:27.718625] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:45.484 [2024-12-11 14:59:28.025112] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:45.484 [2024-12-11 14:59:28.025856] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1f8fbf0:1 started. 00:22:45.484 [2024-12-11 14:59:28.027912] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.484 [2024-12-11 14:59:28.027943] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.484 [2024-12-11 14:59:28.030240] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1f8fbf0 was disconnected and freed. delete nvme_qpair. 
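host/discovery.sh@141 has just restarted discovery with -w (wait_for_attach), and @143 above wraps an identical second bdev_nvme_start_discovery in NOT, so the test passes only if the RPC fails; the -17 "File exists" response below is that expected failure, and it is apparently keyed on the discovery endpoint rather than the -b name, since the later nvme_second attempt on the same 8009 port fails identically while the 8010 attempt with -T 3000 fails with -110 "Connection timed out" instead. A condensed sketch of NOT matching the @652/@655/@663/@679 trace lines, with the valid_exec_arg argument check and the optional expected-exit-status comparison simplified away:

    NOT() {
        local es=0
        "$@" || es=$?          # here: the duplicate discovery start exits nonzero
        if ((es > 128)); then
            es=$((es & ~128))  # assumption: fold killed-by-signal statuses (what @663 hints at)
        fi
        ((!es == 0))           # invert: NOT succeeds only when the command failed
    }

    # as invoked in the trace:
    #   NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
    #       -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w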
00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.484 request: 00:22:45.484 { 00:22:45.484 "name": "nvme", 00:22:45.484 "trtype": "tcp", 00:22:45.484 "traddr": "10.0.0.2", 00:22:45.484 "adrfam": "ipv4", 00:22:45.484 "trsvcid": "8009", 00:22:45.484 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:45.484 "wait_for_attach": true, 00:22:45.484 "method": "bdev_nvme_start_discovery", 00:22:45.484 "req_id": 1 00:22:45.484 } 00:22:45.484 Got JSON-RPC error response 00:22:45.484 response: 00:22:45.484 { 00:22:45.484 "code": -17, 00:22:45.484 "message": "File exists" 00:22:45.484 } 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.484 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.485 request: 00:22:45.485 { 00:22:45.485 "name": "nvme_second", 00:22:45.485 "trtype": "tcp", 00:22:45.485 "traddr": "10.0.0.2", 00:22:45.485 "adrfam": "ipv4", 00:22:45.485 "trsvcid": "8009", 00:22:45.485 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:45.485 "wait_for_attach": true, 00:22:45.485 "method": "bdev_nvme_start_discovery", 00:22:45.485 "req_id": 1 00:22:45.485 } 00:22:45.485 Got JSON-RPC error response 00:22:45.485 response: 00:22:45.485 { 00:22:45.485 "code": -17, 00:22:45.485 "message": "File exists" 00:22:45.485 } 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.485 14:59:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.857 [2024-12-11 14:59:29.219271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.857 [2024-12-11 14:59:29.219325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5bdf0 with addr=10.0.0.2, port=8010 00:22:46.857 [2024-12-11 14:59:29.219352] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:46.857 [2024-12-11 14:59:29.219366] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:46.857 [2024-12-11 14:59:29.219378] 
bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:47.789 [2024-12-11 14:59:30.221889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:47.789 [2024-12-11 14:59:30.221955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5bdf0 with addr=10.0.0.2, port=8010 00:22:47.789 [2024-12-11 14:59:30.221989] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:47.789 [2024-12-11 14:59:30.222006] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:47.789 [2024-12-11 14:59:30.222019] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:48.723 [2024-12-11 14:59:31.223979] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:48.723 request: 00:22:48.723 { 00:22:48.723 "name": "nvme_second", 00:22:48.723 "trtype": "tcp", 00:22:48.723 "traddr": "10.0.0.2", 00:22:48.723 "adrfam": "ipv4", 00:22:48.723 "trsvcid": "8010", 00:22:48.723 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:48.723 "wait_for_attach": false, 00:22:48.723 "attach_timeout_ms": 3000, 00:22:48.723 "method": "bdev_nvme_start_discovery", 00:22:48.723 "req_id": 1 00:22:48.723 } 00:22:48.723 Got JSON-RPC error response 00:22:48.723 response: 00:22:48.723 { 00:22:48.723 "code": -110, 00:22:48.723 "message": "Connection timed out" 00:22:48.723 } 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 746094 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.723 rmmod nvme_tcp 00:22:48.723 rmmod nvme_fabrics 00:22:48.723 rmmod nvme_keyring 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 746067 ']' 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 746067 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 746067 ']' 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 746067 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 746067 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 746067' 00:22:48.723 killing process with pid 746067 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 746067 00:22:48.723 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 746067 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.982 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.983 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.983 14:59:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.983 14:59:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.891 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.891 00:22:50.891 real 0m13.467s 00:22:50.891 user 0m19.320s 00:22:50.891 sys 0m2.828s 00:22:50.891 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.891 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.891 ************************************ 00:22:50.891 END TEST nvmf_host_discovery 00:22:50.891 ************************************ 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.150 ************************************ 00:22:51.150 START TEST nvmf_host_multipath_status 00:22:51.150 ************************************ 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:51.150 * Looking for test storage... 00:22:51.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@345 -- # : 1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.150 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:51.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.150 --rc genhtml_branch_coverage=1 00:22:51.150 --rc genhtml_function_coverage=1 00:22:51.150 --rc genhtml_legend=1 00:22:51.150 --rc geninfo_all_blocks=1 00:22:51.150 --rc geninfo_unexecuted_blocks=1 00:22:51.151 00:22:51.151 ' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.151 --rc genhtml_branch_coverage=1 00:22:51.151 --rc genhtml_function_coverage=1 00:22:51.151 --rc genhtml_legend=1 00:22:51.151 --rc geninfo_all_blocks=1 00:22:51.151 --rc geninfo_unexecuted_blocks=1 00:22:51.151 00:22:51.151 ' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.151 --rc genhtml_branch_coverage=1 00:22:51.151 --rc genhtml_function_coverage=1 00:22:51.151 --rc genhtml_legend=1 00:22:51.151 --rc geninfo_all_blocks=1 00:22:51.151 --rc geninfo_unexecuted_blocks=1 00:22:51.151 00:22:51.151 ' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:51.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.151 --rc genhtml_branch_coverage=1 00:22:51.151 --rc genhtml_function_coverage=1 00:22:51.151 --rc genhtml_legend=1 00:22:51.151 --rc 
geninfo_all_blocks=1 00:22:51.151 --rc geninfo_unexecuted_blocks=1 00:22:51.151 00:22:51.151 ' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:22:51.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.151 14:59:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:53.686 14:59:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.686 
14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:53.686 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:53.686 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:53.686 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:53.686 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.686 14:59:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:53.686 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:53.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:22:53.686 00:22:53.686 --- 10.0.0.2 ping statistics --- 00:22:53.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.686 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:22:53.686 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:22:53.686 00:22:53.686 --- 10.0.0.1 ping statistics --- 00:22:53.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.687 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=749248 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
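Condensed, the nvmf_tcp_init sequence just traced is the following namespace wiring (addresses, names, and the iptables rule exactly as logged; run as root):

    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
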
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 749248 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 749248 ']' 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:53.687 [2024-12-11 14:59:36.093274] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:22:53.687 [2024-12-11 14:59:36.093375] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.687 [2024-12-11 14:59:36.170521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:53.687 [2024-12-11 14:59:36.227897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.687 [2024-12-11 14:59:36.227954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.687 [2024-12-11 14:59:36.227982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.687 [2024-12-11 14:59:36.227994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.687 [2024-12-11 14:59:36.228004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
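waitforlisten then blocks until the target answers on /var/tmp/spdk.sock. A minimal stand-in for that helper, assuming simple polling (the real one in autotest_common.sh adds retry limits and diagnostics):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll the default RPC socket until the app services requests.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done
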
00:22:53.687 [2024-12-11 14:59:36.231568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.687 [2024-12-11 14:59:36.231573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=749248 00:22:53.687 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:53.945 [2024-12-11 14:59:36.680048] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.945 14:59:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:54.511 Malloc0 00:22:54.511 14:59:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:54.770 14:59:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:55.028 14:59:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.286 [2024-12-11 14:59:37.954876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.286 14:59:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:55.543 [2024-12-11 14:59:38.283753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=749533 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 749533 
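The target-side setup traced above reduces to six RPCs (flags exactly as logged; -r on nvmf_create_subsystem enables ANA reporting, which the rest of the test exercises by flipping listener states):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2              # -r: report ANA states
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
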
/var/tmp/bdevperf.sock 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 749533 ']' 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:55.543 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:56.109 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:56.110 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:56.110 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:56.110 14:59:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:56.676 Nvme0n1 00:22:56.676 14:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:57.242 Nvme0n1 00:22:57.242 14:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:57.242 14:59:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:59.143 14:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:59.143 14:59:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:59.401 14:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:59.659 14:59:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:00.594 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:00.594 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.594 14:59:43 
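On the initiator side, bdevperf attaches one controller per listener in multipath mode; condensed from the trace (the waitforlisten polling shown earlier is elided, the -l/-o reconnect knobs are copied verbatim, and the comment on -r reflects rpc.py's bdev retry option as this sketch assumes it):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bdevperf.sock
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$BPERF" -q 128 -o 4096 -w verify -t 90 &
    # (wait for $BPERF to answer RPCs before continuing, as above)
    rpc() { "$SPDK/scripts/rpc.py" -s "$BPERF" "$@"; }
    rpc bdev_nvme_set_options -r -1               # retry count -1: retry failed I/O indefinitely
    # Same subsystem over both listeners; -x multipath merges the two
    # controllers into one Nvme0n1 bdev with two I/O paths.
    for port in 4420 4421; do
        rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s "$port" \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    done
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 120 -s "$BPERF" perform_tests &
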
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.594 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.852 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:00.852 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:00.852 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.852 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.110 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.110 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.110 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.110 14:59:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.677 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.936 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.936 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:01.936 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.936 14:59:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.501 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.501 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:02.501 14:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:02.501 14:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:03.067 14:59:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:04.001 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:04.001 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:04.001 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.001 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.260 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.260 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:04.260 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.260 14:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.518 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.518 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.518 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.518 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.776 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.776 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.776 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
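The repetitive blocks above all come from two small helpers in host/multipath_status.sh; a reconstruction from the trace (argument order per the @68 through @73 lines: current, connected, accessible, each queried for port 4420 then 4421, and the jq filter copied verbatim):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bdevperf.sock
    port_status() {  # <trsvcid> <field> <expected true/false>
        local got
        got=$("$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_get_io_paths | jq -r \
            ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ $got == "$3" ]]
    }
    check_status() {  # six booleans: current, connected, accessible for 4420/4421 in turn
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }
    check_status true false true true true true   # the optimized/optimized expectation
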
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.776 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.035 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.035 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.035 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.035 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.293 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.293 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:05.293 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.293 14:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.552 14:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.552 14:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:05.552 14:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:05.810 14:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:06.069 14:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:07.003 14:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:07.003 14:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:07.003 14:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.003 14:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
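set_ANA_state, likewise reconstructed from the trace, flips the ANA state of each listener independently; every change is followed by sleep 1 before the paths are re-checked:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NQN=nqn.2016-06.io.spdk:cnode1
    set_ANA_state() {  # <state for port 4420> <state for port 4421>
        "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$SPDK/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
            "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized inaccessible && sleep 1
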
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.570 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.828 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.828 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.828 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.828 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.394 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.394 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:08.394 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.394 14:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.394 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.394 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.394 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.394 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.653 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.653 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:08.653 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:23:09.218 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:09.218 14:59:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:10.647 14:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:10.647 14:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.647 14:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.647 14:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.647 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.647 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:10.647 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.647 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.930 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.930 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.930 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.930 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.187 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.188 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.188 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.188 14:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.444 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.444 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.444 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:11.444 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.702 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.702 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:11.702 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.702 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.960 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.960 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:11.960 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:12.256 14:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:12.514 14:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:13.446 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:13.446 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:13.446 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.446 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.011 14:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.269 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.269 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.269 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.269 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.527 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.527 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:14.784 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.784 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:15.042 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.042 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:15.042 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.042 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.299 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.299 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:15.299 14:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:15.556 14:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:15.814 14:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:16.747 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:16.747 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:16.747 14:59:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.747 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:17.005 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.005 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:17.005 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.005 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:17.263 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.263 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:17.263 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.263 14:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.521 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.521 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.521 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.521 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.779 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.779 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:17.779 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.779 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:18.345 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:18.345 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:18.345 15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.345 
15:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:18.603 15:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.603 15:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:18.860 15:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:18.861 15:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:19.119 15:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:19.377 15:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:20.311 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:20.311 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:20.311 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.311 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:20.573 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.573 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:20.573 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.573 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.832 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.832 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.832 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.832 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:21.399 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.399 15:00:03 
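From this point the multipath policy switches to active_active, which is why the subsequent check_status calls expect current == true on both ports at once: under active_active every connected, accessible path carries I/O, whereas the earlier default (active_passive) runs reported only one current path. The switch is a single RPC against the bdevperf socket:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy \
        -b Nvme0n1 -p active_active
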
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:21.399 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.399 15:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:21.399 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.399 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:21.399 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.399 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:21.964 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:22.536 15:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:22.536 15:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.909 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.167 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.167 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.167 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.167 15:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:24.425 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.425 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:24.425 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.425 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:24.683 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.683 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:24.683 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.683 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:24.941 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.941 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:24.941 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.941 15:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.507 15:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.507 15:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:25.507 
15:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:25.765 15:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:26.023 15:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:26.958 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:26.958 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:26.958 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.958 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.216 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.216 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:27.216 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.216 15:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.474 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.474 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:27.474 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.474 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:27.732 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.732 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:27.732 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.732 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:27.990 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.990 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:27.990 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.990 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.248 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.248 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:28.248 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.248 15:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:28.506 15:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.506 15:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:28.506 15:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:28.764 15:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:29.022 15:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:30.395 15:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:30.395 15:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:30.395 15:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.395 15:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.396 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.396 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:30.396 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.396 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.653 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:30.653 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.653 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.653 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.911 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.911 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.911 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.911 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:31.168 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.168 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:31.168 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.169 15:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:31.426 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.426 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:31.427 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.427 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.685 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.685 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 749533 00:23:31.685 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 749533 ']' 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 749533 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 749533 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 749533' 00:23:31.943 killing process with pid 749533 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 749533 00:23:31.943 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 749533 00:23:31.943 { 00:23:31.943 "results": [ 00:23:31.943 { 00:23:31.943 "job": "Nvme0n1", 00:23:31.943 "core_mask": "0x4", 00:23:31.943 "workload": "verify", 00:23:31.943 "status": "terminated", 00:23:31.943 "verify_range": { 00:23:31.943 "start": 0, 00:23:31.943 "length": 16384 00:23:31.943 }, 00:23:31.943 "queue_depth": 128, 00:23:31.943 "io_size": 4096, 00:23:31.943 "runtime": 34.575788, 00:23:31.943 "iops": 7891.186746054783, 00:23:31.943 "mibps": 30.824948226776495, 00:23:31.943 "io_failed": 0, 00:23:31.943 "io_timeout": 0, 00:23:31.943 "avg_latency_us": 16194.282925454077, 00:23:31.943 "min_latency_us": 168.39111111111112, 00:23:31.943 "max_latency_us": 4026531.84 00:23:31.943 } 00:23:31.943 ], 00:23:31.943 "core_count": 1 00:23:31.943 } 00:23:32.212 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 749533 00:23:32.212 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:32.212 [2024-12-11 14:59:38.351371] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:23:32.212 [2024-12-11 14:59:38.351455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749533 ] 00:23:32.213 [2024-12-11 14:59:38.418279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.213 [2024-12-11 14:59:38.475633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.213 Running I/O for 90 seconds... 
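
The JSON block a few lines above is bdevperf's per-job summary, printed as the process is being killed. Assuming the blob is saved with the log-time prefixes stripped (result.json is a hypothetical name used here), jq pulls out the headline numbers:

jq -r '.results[] |
  "\(.job): \(.iops | floor) IOPS, \(.mibps * 100 | floor / 100) MiB/s, " +
  "avg latency \(.avg_latency_us | floor) us over \(.runtime | floor) s (\(.status))"' \
  result.json
# -> Nvme0n1: 7891 IOPS, 30.82 MiB/s, avg latency 16194 us over 34 s (terminated)

What follows from here is the run's captured per-test log (try.txt), replayed in full: bdevperf's start-up banner, its periodic throughput samples, and the NVMe-level notices generated while the ANA states were being flipped.
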
00:23:32.213 8275.00 IOPS, 32.32 MiB/s [2024-12-11T14:00:14.986Z] 8340.00 IOPS, 32.58 MiB/s [2024-12-11T14:00:14.986Z] 8339.00 IOPS, 32.57 MiB/s [2024-12-11T14:00:14.986Z] 8369.75 IOPS, 32.69 MiB/s [2024-12-11T14:00:14.986Z] 8349.20 IOPS, 32.61 MiB/s [2024-12-11T14:00:14.986Z] 8378.33 IOPS, 32.73 MiB/s [2024-12-11T14:00:14.986Z] 8383.00 IOPS, 32.75 MiB/s [2024-12-11T14:00:14.986Z] 8397.25 IOPS, 32.80 MiB/s [2024-12-11T14:00:14.986Z] 8395.56 IOPS, 32.80 MiB/s [2024-12-11T14:00:14.986Z] 8402.10 IOPS, 32.82 MiB/s [2024-12-11T14:00:14.986Z] 8393.00 IOPS, 32.79 MiB/s [2024-12-11T14:00:14.986Z] 8370.08 IOPS, 32.70 MiB/s [2024-12-11T14:00:14.986Z] 8369.69 IOPS, 32.69 MiB/s [2024-12-11T14:00:14.986Z] 8380.57 IOPS, 32.74 MiB/s [2024-12-11T14:00:14.986Z] 8380.67 IOPS, 32.74 MiB/s [2024-12-11T14:00:14.986Z] [2024-12-11 14:59:54.910124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 14:59:54.910190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:89264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:89272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.213 [2024-12-11 14:59:54.910956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.910977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.910992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.911555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.911577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 14:59:54.912109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.213 [2024-12-11 14:59:54.912133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
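
Every completion in this stretch carries the same status pair. SPDK prints it as (SCT/SC); in the NVMe base specification, status code type 0x3 is Path Related Status, and status code 0x02 under that type is Asymmetric Access Inaccessible - the expected result once the test parks a listener in the inaccessible ANA state. A small lookup sketch, illustrative and not part of the test:

# decode_status <sct> <sc>: name the (SCT/SC) pairs relevant to this run.
decode_status() {
    case "$1/$2" in
        00/00) echo "generic / successful completion" ;;
        03/01) echo "path / asymmetric access persistent loss" ;;
        03/02) echo "path / asymmetric access inaccessible" ;;
        03/03) echo "path / asymmetric access transition" ;;
        *)     echo "see the NVMe base spec status tables" ;;
    esac
}
decode_status 03 02   # -> path / asymmetric access inaccessible
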
00:23:32.214 [2024-12-11 14:59:54.912695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:89632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:89640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:89656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:89664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:89672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.912976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:89680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.912993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:89712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:89720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:89728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:89736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:89768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:89792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:89800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:89808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:89816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.913948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:89824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.913979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.914005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.214 [2024-12-11 14:59:54.914022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 14:59:54.914047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.215 [2024-12-11 14:59:54.914163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:89864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:89880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.215 [2024-12-11 14:59:54.914334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.215 [2024-12-11 14:59:54.914377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:89888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:89896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:89904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:89920 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:89936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:89944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:89952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:89984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.914958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.914988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:89992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:90048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
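
Runs of notices like this are easier to audit rolled up. A throwaway one-liner over the saved try.txt - illustrative only, and the count shown below is invented - tallies completions per status string:

grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]*([0-9a-f/]*)' try.txt |
    sort | uniq -c | sort -rn
# e.g.  518 spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
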
00:23:32.215 [2024-12-11 14:59:54.915442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:32.215 [2024-12-11 14:59:54.915908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.215 [2024-12-11 14:59:54.915924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:32.216 [2024-12-11 14:59:54.915951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.216 [2024-12-11 14:59:54.915967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:32.216 [2024-12-11 14:59:54.915995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.216 [2024-12-11 14:59:54.916011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:23:32.216 [2024-12-11 14:59:54.916038-916570] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs: WRITE sqid:1 nsid:1 lba:90160-90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:0050-005b p:0 m:0 dnr:0
00:23:32.216 7865.12 IOPS, 30.72 MiB/s [2024-12-11T14:00:14.989Z] 7402.47 IOPS, 28.92 MiB/s [2024-12-11T14:00:14.989Z] 6991.22 IOPS, 27.31 MiB/s [2024-12-11T14:00:14.989Z] 6623.26 IOPS, 25.87 MiB/s [2024-12-11T14:00:14.989Z] 6703.55 IOPS, 26.19 MiB/s [2024-12-11T14:00:14.989Z] 6777.10 IOPS, 26.47 MiB/s [2024-12-11T14:00:14.989Z] 6868.27 IOPS, 26.83 MiB/s [2024-12-11T14:00:14.989Z] 7053.65 IOPS, 27.55 MiB/s [2024-12-11T14:00:14.989Z] 7223.75 IOPS, 28.22 MiB/s [2024-12-11T14:00:14.989Z] 7377.76 IOPS, 28.82 MiB/s [2024-12-11T14:00:14.989Z] 7433.96 IOPS, 29.04 MiB/s [2024-12-11T14:00:14.989Z] 7464.78 IOPS, 29.16 MiB/s [2024-12-11T14:00:14.989Z] 7497.50 IOPS, 29.29 MiB/s [2024-12-11T14:00:14.989Z] 7555.03 IOPS, 29.51 MiB/s [2024-12-11T14:00:14.989Z] 7672.67 IOPS, 29.97 MiB/s [2024-12-11T14:00:14.989Z] 7781.03 IOPS, 30.39 MiB/s [2024-12-11T14:00:14.989Z]
00:23:32.216 [2024-12-11 15:00:11.776956-790695] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs: WRITE sqid:1 nsid:1 lba:36648-37808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:36648-37624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:23:32.221 [2024-12-11 15:00:11.790695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0
m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.790717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.790734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.790756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.790772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.790794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.790810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.790832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:37872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.790848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.790870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.790885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.790909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.790925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.792421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.792460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.792497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.792535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:00:11.792595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:00:11.792741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:00:11.792758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.792780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.792796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.792818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.792835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.792858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.792874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.792897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.792914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.792936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.792952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.792973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.792989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.222 [2024-12-11 15:00:11.793178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.793617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.793761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.793777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.795960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.795987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.796034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.796074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.796113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:00:11.796494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:00:11.796517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.222 [2024-12-11 15:00:11.796533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:23:32.223 [2024-12-11 15:00:11.796570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.796592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.796633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.796671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.796709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.796747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.796785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:37688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.796823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.796860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.796903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.796943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.796965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.796980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.797002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.797018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.797056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.797072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.798870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.223 [2024-12-11 15:00:11.798960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.798982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.798998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.799036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.799077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.799117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.799155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.799192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.223 [2024-12-11 15:00:11.799230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.799267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.223 [2024-12-11 15:00:11.799306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:00:11.799328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:37600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.799813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.799835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.799852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:23:32.224 [2024-12-11 15:00:11.802491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.224 [2024-12-11 15:00:11.802823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:00:11.802861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.224 [2024-12-11 15:00:11.802877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[00:23:32.224-00:23:32.230: several hundred near-identical notice pairs condensed. Each pair is a nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* for a READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) or WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) command on sqid:1 nsid:1 len:8, lba in the range 36856-39152, followed by a nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1; timestamps run from 2024-12-11 15:00:11.802898 to 15:00:11.820422.]
00:23:32.230 [2024-12-11 15:00:11.820422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34
nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.820438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.820460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.820475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.820497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.820517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.822821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.822860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.822899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.822938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.822976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.822998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 
dnr:0 00:23:32.230 [2024-12-11 15:00:11.823244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.230 [2024-12-11 15:00:11.823634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.230 [2024-12-11 15:00:11.823787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:32.230 [2024-12-11 15:00:11.823809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.823825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.823848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.823864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.826908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.826937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.826965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.826984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.231 [2024-12-11 15:00:11.827446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.827877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:39256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.827974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.827994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.828183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.231 [2024-12-11 15:00:11.828348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:32.231 [2024-12-11 15:00:11.828370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.231 [2024-12-11 15:00:11.828385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 
00:23:32.232 [2024-12-11 15:00:11.828643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.828735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.828773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.828811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.828864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.828886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.828901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.831848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.831877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.831925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.831945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.831975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.831992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:32.232 [2024-12-11 15:00:11.832821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.832896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.832957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:39288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.232 [2024-12-11 15:00:11.832977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:32.232 [2024-12-11 15:00:11.833000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.232 [2024-12-11 15:00:11.833016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.833169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.833207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.833244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.833282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.833320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.833681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.233 [2024-12-11 15:00:11.833697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:32.233 [2024-12-11 15:00:11.834742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.233 [2024-12-11 15:00:11.834759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 
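In the completions condensed above, "(03/02)" is NVMe Status Code Type 3h (Path Related Status) with Status Code 02h, Asymmetric Access Inaccessible, and dnr:0 means the Do Not Retry bit is clear, so the initiator keeps requeueing the same LBAs until multipath moves them to a usable path. A rough way to gauge the volume of these retries from a saved copy of the console output (the file name here is illustrative, not something the harness produces):

  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' console.log | sort | uniq -c
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log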
00:23:32.233 7860.75 IOPS, 30.71 MiB/s
[2024-12-11T14:00:15.006Z] 7877.27 IOPS, 30.77 MiB/s
[2024-12-11T14:00:15.006Z] 7888.53 IOPS, 30.81 MiB/s
[2024-12-11T14:00:15.006Z] Received shutdown signal, test time was about 34.576574 seconds
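The MiB/s figures in these periodic samples, and in the summary table below, follow from the 4096-byte I/O size shown in the job line: throughput = IOPS x I/O size. A one-line sanity check with the values from this run:

  # bdevperf's MiB/s column is IOPS x 4096 B / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 7891.19 * 4096 / 1048576 }'   # -> 30.82 MiB/s, the Nvme0n1 row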
[2024-12-11T14:00:15.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:32.233 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:32.233 Verification LBA range: start 0x0 length 0x4000
00:23:32.233 Nvme0n1 : 34.58 7891.19 30.82 0.00 0.00 16194.28 168.39 4026531.84
00:23:32.233 [2024-12-11T14:00:15.006Z] ===================================================================================================================
00:23:32.233 [2024-12-11T14:00:15.006Z] Total : 7891.19 30.82 0.00 0.00 16194.28 168.39 4026531.84
00:23:32.233 15:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:32.495 15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 749248 ']'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 749248
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 749248 ']'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 749248
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 749248
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 749248'
00:23:32.495 killing process with pid 749248
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 749248
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 749248
00:23:32.753 15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
15:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:34.658 15:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:34.658
00:23:34.658 real 0m43.720s
00:23:34.658 user 2m13.694s
00:23:34.658 sys 0m10.690s
00:23:34.658 15:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:34.658 15:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:23:34.658 ************************************
00:23:34.658 END TEST nvmf_host_multipath_status
00:23:34.658 ************************************
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:34.917 ************************************
00:23:34.917 START TEST nvmf_discovery_remove_ifc
************************************
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:23:34.917 * Looking for test storage...
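The READ/WRITE completions in the run above, logged as ASYMMETRIC ACCESS INACCESSIBLE (03/02), are path-related NVMe status (SCT 0x3, SC 0x02: the namespace's ANA group became Inaccessible on that path). The multipath_status test drives this by changing the listeners' ANA states, and the host retries the affected I/O on the surviving path, which is why the summary table still reports 0.00 Fail/s at roughly 7891 IOPS over the 34.58 s run. A quick way to tally such completions from a saved console log, assuming the output was captured to a file (console.log is a hypothetical name):

  # count ANA-inaccessible completions per queue id
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' console.log | awk '{print $NF}' | sort | uniq -c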
00:23:34.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:34.917 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:34.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:34.917 --rc genhtml_branch_coverage=1
00:23:34.917 --rc genhtml_function_coverage=1
00:23:34.917 --rc genhtml_legend=1
00:23:34.917 --rc geninfo_all_blocks=1
00:23:34.917 --rc geninfo_unexecuted_blocks=1
00:23:34.917
00:23:34.917 '
00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:34.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:34.918 --rc genhtml_branch_coverage=1
00:23:34.918 --rc genhtml_function_coverage=1
00:23:34.918 --rc genhtml_legend=1
00:23:34.918 --rc geninfo_all_blocks=1
00:23:34.918 --rc geninfo_unexecuted_blocks=1
00:23:34.918
00:23:34.918 '
00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:23:34.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:34.918 --rc genhtml_branch_coverage=1
00:23:34.918 --rc genhtml_function_coverage=1
00:23:34.918 --rc genhtml_legend=1
00:23:34.918 --rc geninfo_all_blocks=1
00:23:34.918 --rc geninfo_unexecuted_blocks=1
00:23:34.918
00:23:34.918 '
00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:23:34.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:34.918 --rc genhtml_branch_coverage=1
00:23:34.918 --rc genhtml_function_coverage=1
00:23:34.918 --rc genhtml_legend=1
00:23:34.918 --rc geninfo_all_blocks=1
00:23:34.918 --rc geninfo_unexecuted_blocks=1
00:23:34.918
00:23:34.918 '
00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:34.918
15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.918 15:00:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.450 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:37.450 15:00:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:37.451 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.451 15:00:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:37.451 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:37.451 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:37.451 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}")
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:37.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:37.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms
00:23:37.451
00:23:37.451 --- 10.0.0.2 ping statistics ---
00:23:37.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.451 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:37.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:37.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms
00:23:37.451
00:23:37.451 --- 10.0.0.1 ping statistics ---
00:23:37.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:37.451 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:37.451 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=756625
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 756625
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 756625 ']'
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
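The nvmf_tcp_init trace above is the physical-NIC network split: port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk and becomes the target interface at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and one ping in each direction verifies the path before any NVMe/TCP traffic. The same topology can be sketched without e810 hardware using a veth pair (tgt_ns, veth_tgt and veth_ini are hypothetical names, not from this run):

  ip netns add tgt_ns
  ip link add veth_ini type veth peer name veth_tgt
  ip link set veth_tgt netns tgt_ns                             # target side into its namespace
  ip addr add 10.0.0.1/24 dev veth_ini                          # initiator IP
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt     # target IP
  ip link set veth_ini up
  ip netns exec tgt_ns ip link set veth_tgt up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec tgt_ns ping -c 1 10.0.0.1                       # target -> initiator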
00:23:37.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:37.452 15:00:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:37.452 [2024-12-11 15:00:19.987029] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:23:37.452 [2024-12-11 15:00:19.987104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:37.452 [2024-12-11 15:00:20.065679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:37.452 [2024-12-11 15:00:20.122968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:37.452 [2024-12-11 15:00:20.123044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:37.452 [2024-12-11 15:00:20.123058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:37.452 [2024-12-11 15:00:20.123069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:37.452 [2024-12-11 15:00:20.123079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:37.452 [2024-12-11 15:00:20.123749] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:37.710 [2024-12-11 15:00:20.278276] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:37.710 [2024-12-11 15:00:20.286489] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:37.710 null0
00:23:37.710 [2024-12-11 15:00:20.318419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=756646
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 756646 /tmp/host.sock
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 756646 ']'
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:23:37.710 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:37.710 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:37.710 [2024-12-11 15:00:20.386754] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:23:37.710 [2024-12-11 15:00:20.386842] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756646 ]
00:23:37.710 [2024-12-11 15:00:20.457126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:37.968 [2024-12-11 15:00:20.517742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:37.968 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:23:38.226 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:38.226 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:23:38.226 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.226 15:00:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.189 [2024-12-11 15:00:21.795962] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:39.189 [2024-12-11 15:00:21.796001] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:39.189 [2024-12-11 15:00:21.796026] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:39.189 [2024-12-11 15:00:21.923443] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:39.470 [2024-12-11 15:00:22.024453] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:39.470 [2024-12-11 15:00:22.025672] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1982590:1 started. 00:23:39.470 [2024-12-11 15:00:22.027401] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:39.470 [2024-12-11 15:00:22.027463] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:39.470 [2024-12-11 15:00:22.027503] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:39.470 [2024-12-11 15:00:22.027527] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:39.470 [2024-12-11 15:00:22.027586] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.470 [2024-12-11 15:00:22.034700] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1982590 was disconnected and freed. delete nvme_qpair. 
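With both apps up, the test topology is complete: the target (nvmfpid 756625) inside the namespace listens on 10.0.0.2 port 8009 (discovery) and port 4420 (I/O), while a second SPDK app (hostpid 756646) on /tmp/host.sock plays the NVMe host. Because the host app was started with --wait-for-rpc, its framework has to be released over RPC before discovery can begin. Condensed from the rpc_cmd calls traced above (same socket, flags and NQN as in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /tmp/host.sock bdev_nvme_set_options -e 1
  "$rpc" -s /tmp/host.sock framework_start_init
  "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
      --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
  "$rpc" -s /tmp/host.sock bdev_get_bdevs    # the attached namespace shows up as nvme0n1

The discovery service at 10.0.0.2:8009 advertises nqn.2016-06.io.spdk:cnode0 on port 4420; attaching it creates controller nvme0 and bdev nvme0n1, exactly as logged above, and the test can now yank the target interface out from under that connection.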
00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:39.470 15:00:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.403 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.661 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.661 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:40.661 15:00:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:41.594 15:00:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:41.594 15:00:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:42.529 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:42.530 15:00:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.903 15:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:44.836 15:00:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.836 [2024-12-11 15:00:27.468755] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:44.836 [2024-12-11 15:00:27.468816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.836 [2024-12-11 15:00:27.468851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.836 [2024-12-11 15:00:27.468878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.836 [2024-12-11 15:00:27.468891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.836 [2024-12-11 15:00:27.468904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.836 [2024-12-11 15:00:27.468916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.836 [2024-12-11 15:00:27.468928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.836 [2024-12-11 15:00:27.468940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.836 [2024-12-11 15:00:27.468952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:44.836 [2024-12-11 15:00:27.468963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:44.836 [2024-12-11 15:00:27.468975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195ee10 is same with the state(6) to be set 00:23:44.836 [2024-12-11 15:00:27.478777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195ee10 (9): Bad file descriptor 00:23:44.837 [2024-12-11 15:00:27.488817] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:44.837 [2024-12-11 15:00:27.488857] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
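This is the failure cascade the test is after: once the target interface loses its address and link, spdk_sock_recv() on the queue times out (errno 110), the pending ASYNC EVENT REQUEST and KEEP ALIVE commands are aborted with ABORTED - SQ DELETION, the qpairs are destroyed, and bdev_nvme starts its disconnect/reconnect cycle. With the attach options used in this run the budget for that cycle is small; roughly, assuming each retry fires on schedule:

  reconnect attempts before giving up ~ ceil(ctrlr-loss-timeout-sec / reconnect-delay-sec) = ceil(2 / 1) = 2

and --fast-io-fail-timeout-sec 1 fails outstanding I/O after the first second, so nvme0n1 should disappear within a few seconds of the interface going down.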
00:23:44.837 [2024-12-11 15:00:27.488871] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:44.837 [2024-12-11 15:00:27.488881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.837 [2024-12-11 15:00:27.488933] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.769 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.769 [2024-12-11 15:00:28.502572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:45.769 [2024-12-11 15:00:28.502624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x195ee10 with addr=10.0.0.2, port=4420 00:23:45.769 [2024-12-11 15:00:28.502641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x195ee10 is same with the state(6) to be set 00:23:45.769 [2024-12-11 15:00:28.502667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195ee10 (9): Bad file descriptor 00:23:45.769 [2024-12-11 15:00:28.503039] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:45.769 [2024-12-11 15:00:28.503076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:45.769 [2024-12-11 15:00:28.503093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:45.769 [2024-12-11 15:00:28.503115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:45.769 [2024-12-11 15:00:28.503128] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:45.769 [2024-12-11 15:00:28.503139] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:45.769 [2024-12-11 15:00:28.503146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:45.770 [2024-12-11 15:00:28.503158] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
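While this reconnect loop cycles (connect() failing with errno 110, then "controller reinitialization failed"), the controller-level state can also be watched over the same RPC socket. A hypothetical spot-check, not part of the traced script: bdev_nvme_get_controllers is a standard SPDK RPC, but this loop is an illustration only.

# Hypothetical: watch host-side controller state during the reconnect storm.
while sleep 1; do
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
done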
00:23:45.770 [2024-12-11 15:00:28.503166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:45.770 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.770 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:45.770 15:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.142 [2024-12-11 15:00:29.505658] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:47.142 [2024-12-11 15:00:29.505707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:47.142 [2024-12-11 15:00:29.505753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:47.142 [2024-12-11 15:00:29.505768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:47.142 [2024-12-11 15:00:29.505784] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:47.142 [2024-12-11 15:00:29.505798] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:47.142 [2024-12-11 15:00:29.505810] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:47.142 [2024-12-11 15:00:29.505818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:47.142 [2024-12-11 15:00:29.505882] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:47.142 [2024-12-11 15:00:29.505946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.142 [2024-12-11 15:00:29.505969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.142 [2024-12-11 15:00:29.505989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.142 [2024-12-11 15:00:29.506002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.142 [2024-12-11 15:00:29.506015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.142 [2024-12-11 15:00:29.506028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.142 [2024-12-11 15:00:29.506040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.142 [2024-12-11 15:00:29.506052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.142 [2024-12-11 15:00:29.506066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.142 [2024-12-11 15:00:29.506078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.142 [2024-12-11 15:00:29.506104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:47.142 [2024-12-11 15:00:29.506159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194e560 (9): Bad file descriptor 00:23:47.142 [2024-12-11 15:00:29.507149] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:47.142 [2024-12-11 15:00:29.507170] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.142 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.143 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.143 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:47.143 15:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:48.074 15:00:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:48.074 15:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:49.006 [2024-12-11 15:00:31.561703] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:49.006 [2024-12-11 15:00:31.561736] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:49.006 [2024-12-11 15:00:31.561762] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:49.006 [2024-12-11 15:00:31.648038] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:49.006 15:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:49.006 [2024-12-11 15:00:31.743775] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:49.006 [2024-12-11 15:00:31.744615] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1938460:1 started. 
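That attach sequence is the tail of the test's recovery path: @82/@83 above re-plumbed the target interface inside its network namespace, the discovery service re-attached, and @86 polls until the rediscovered namespace surfaces as nvme1n1. Condensed from the xtrace:

# Steps as traced at discovery_remove_ifc.sh @82-@86:
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # restore target IP
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up               # bring the port back up
wait_for_bdev nvme1n1  # block until discovery re-creates the bdev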
00:23:49.006 [2024-12-11 15:00:31.746096] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:49.006 [2024-12-11 15:00:31.746141] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:49.006 [2024-12-11 15:00:31.746175] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:49.006 [2024-12-11 15:00:31.746199] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:49.006 [2024-12-11 15:00:31.746211] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:49.006 [2024-12-11 15:00:31.750248] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1938460 was disconnected and freed. delete nvme_qpair. 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 756646 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 756646 ']' 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 756646 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756646 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756646' 00:23:50.377 killing process with pid 756646 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 756646 00:23:50.377 15:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 756646 00:23:50.377 15:00:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.377 rmmod nvme_tcp 00:23:50.377 rmmod nvme_fabrics 00:23:50.377 rmmod nvme_keyring 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.377 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 756625 ']' 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 756625 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 756625 ']' 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 756625 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 756625 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 756625' 00:23:50.378 killing process with pid 756625 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 756625 00:23:50.378 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 756625 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.637 15:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.179 00:23:53.179 real 0m17.951s 00:23:53.179 user 0m25.981s 00:23:53.179 sys 0m3.141s 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.179 ************************************ 00:23:53.179 END TEST nvmf_discovery_remove_ifc 00:23:53.179 ************************************ 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.179 ************************************ 00:23:53.179 START TEST nvmf_identify_kernel_target 00:23:53.179 ************************************ 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:53.179 * Looking for test storage... 
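Before the storage probe result, note what the scripts/common.sh burst below is doing: the lcov version gate `lt 1.15 2` splits each version string on '.' and '-' and compares the fields numerically. A condensed sketch reconstructed from the xtrace (handles the '<' case only; the real cmp_versions also validates each field via decimal, so details may differ):

# Reconstructed from the xtrace below; simplified.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0  # strictly less
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1  # equal is not strictly less
}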
00:23:53.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.179 --rc genhtml_branch_coverage=1 00:23:53.179 --rc genhtml_function_coverage=1 00:23:53.179 --rc genhtml_legend=1 00:23:53.179 --rc geninfo_all_blocks=1 00:23:53.179 --rc geninfo_unexecuted_blocks=1 00:23:53.179 00:23:53.179 ' 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.179 --rc genhtml_branch_coverage=1 00:23:53.179 --rc genhtml_function_coverage=1 00:23:53.179 --rc genhtml_legend=1 00:23:53.179 --rc geninfo_all_blocks=1 00:23:53.179 --rc geninfo_unexecuted_blocks=1 00:23:53.179 00:23:53.179 ' 00:23:53.179 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.179 --rc genhtml_branch_coverage=1 00:23:53.179 --rc genhtml_function_coverage=1 00:23:53.179 --rc genhtml_legend=1 00:23:53.179 --rc geninfo_all_blocks=1 00:23:53.179 --rc geninfo_unexecuted_blocks=1 00:23:53.179 00:23:53.179 ' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.180 --rc genhtml_branch_coverage=1 00:23:53.180 --rc genhtml_function_coverage=1 00:23:53.180 --rc genhtml_legend=1 00:23:53.180 --rc geninfo_all_blocks=1 00:23:53.180 --rc geninfo_unexecuted_blocks=1 00:23:53.180 00:23:53.180 ' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:53.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.180 15:00:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.088 15:00:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:55.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:55.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:55.088 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:55.088 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.088 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:23:55.348 00:23:55.348 --- 10.0.0.2 ping statistics --- 00:23:55.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.348 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:55.348 00:23:55.348 --- 10.0.0.1 ping statistics --- 00:23:55.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.348 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.348 15:00:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:55.348 15:00:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:55.348 15:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:55.348 15:00:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:56.723 Waiting for block devices as requested 00:23:56.723 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:56.723 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:56.723 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:56.981 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:56.981 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:56.981 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:56.981 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:57.241 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:57.241 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:57.241 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:57.241 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:57.500 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:57.500 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:57.500 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:57.758 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:57.758 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:57.758 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
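The configfs writes traced next build a kernel nvmet target backed by the selected local drive, listening on 10.0.0.1:4420. In condensed form below; the redirect targets are inferred (xtrace does not print redirections) and follow the standard nvmet configfs layout, so treat the attribute paths as an assumption:

# Condensed from the nvmf/common.sh trace that follows; attribute paths inferred.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # inferred target file
echo 1            > "$subsys/attr_allow_any_host"               # inferred target file
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # inferred target file
echo 1            > "$subsys/namespaces/1/enable"               # inferred target file
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                # inferred target file
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"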
00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:57.758 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:58.016 No valid GPT data, bailing 00:23:58.016 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:58.016 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:58.016 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:58.016 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:58.016 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:58.017 00:23:58.017 Discovery Log Number of Records 2, Generation counter 2 00:23:58.017 =====Discovery Log Entry 0====== 00:23:58.017 trtype: tcp 00:23:58.017 adrfam: ipv4 00:23:58.017 subtype: current discovery subsystem 00:23:58.017 treq: not specified, sq flow control disable supported 00:23:58.017 portid: 1 00:23:58.017 trsvcid: 4420 00:23:58.017 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:58.017 traddr: 10.0.0.1 00:23:58.017 eflags: none 00:23:58.017 sectype: none 00:23:58.017 =====Discovery Log Entry 1====== 00:23:58.017 trtype: tcp 00:23:58.017 adrfam: ipv4 00:23:58.017 subtype: nvme subsystem 00:23:58.017 treq: not specified, sq flow control disable 
supported 00:23:58.017 portid: 1 00:23:58.017 trsvcid: 4420 00:23:58.017 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:58.017 traddr: 10.0.0.1 00:23:58.017 eflags: none 00:23:58.017 sectype: none 00:23:58.017 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:58.017 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:58.278 ===================================================== 00:23:58.278 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:58.278 ===================================================== 00:23:58.278 Controller Capabilities/Features 00:23:58.278 ================================ 00:23:58.278 Vendor ID: 0000 00:23:58.278 Subsystem Vendor ID: 0000 00:23:58.278 Serial Number: 3ff320cbcf5c7fecb326 00:23:58.278 Model Number: Linux 00:23:58.278 Firmware Version: 6.8.9-20 00:23:58.278 Recommended Arb Burst: 0 00:23:58.278 IEEE OUI Identifier: 00 00 00 00:23:58.278 Multi-path I/O 00:23:58.278 May have multiple subsystem ports: No 00:23:58.278 May have multiple controllers: No 00:23:58.278 Associated with SR-IOV VF: No 00:23:58.278 Max Data Transfer Size: Unlimited 00:23:58.278 Max Number of Namespaces: 0 00:23:58.278 Max Number of I/O Queues: 1024 00:23:58.278 NVMe Specification Version (VS): 1.3 00:23:58.278 NVMe Specification Version (Identify): 1.3 00:23:58.278 Maximum Queue Entries: 1024 00:23:58.278 Contiguous Queues Required: No 00:23:58.278 Arbitration Mechanisms Supported 00:23:58.278 Weighted Round Robin: Not Supported 00:23:58.278 Vendor Specific: Not Supported 00:23:58.278 Reset Timeout: 7500 ms 00:23:58.278 Doorbell Stride: 4 bytes 00:23:58.278 NVM Subsystem Reset: Not Supported 00:23:58.278 Command Sets Supported 00:23:58.278 NVM Command Set: Supported 00:23:58.278 Boot Partition: Not Supported 00:23:58.278 Memory Page Size Minimum: 4096 bytes 00:23:58.278 Memory Page Size Maximum: 4096 bytes 00:23:58.278 Persistent Memory Region: Not Supported 00:23:58.278 Optional Asynchronous Events Supported 00:23:58.278 Namespace Attribute Notices: Not Supported 00:23:58.278 Firmware Activation Notices: Not Supported 00:23:58.278 ANA Change Notices: Not Supported 00:23:58.278 PLE Aggregate Log Change Notices: Not Supported 00:23:58.278 LBA Status Info Alert Notices: Not Supported 00:23:58.278 EGE Aggregate Log Change Notices: Not Supported 00:23:58.278 Normal NVM Subsystem Shutdown event: Not Supported 00:23:58.278 Zone Descriptor Change Notices: Not Supported 00:23:58.278 Discovery Log Change Notices: Supported 00:23:58.278 Controller Attributes 00:23:58.278 128-bit Host Identifier: Not Supported 00:23:58.278 Non-Operational Permissive Mode: Not Supported 00:23:58.278 NVM Sets: Not Supported 00:23:58.278 Read Recovery Levels: Not Supported 00:23:58.278 Endurance Groups: Not Supported 00:23:58.278 Predictable Latency Mode: Not Supported 00:23:58.278 Traffic Based Keep ALive: Not Supported 00:23:58.278 Namespace Granularity: Not Supported 00:23:58.278 SQ Associations: Not Supported 00:23:58.278 UUID List: Not Supported 00:23:58.278 Multi-Domain Subsystem: Not Supported 00:23:58.278 Fixed Capacity Management: Not Supported 00:23:58.278 Variable Capacity Management: Not Supported 00:23:58.278 Delete Endurance Group: Not Supported 00:23:58.278 Delete NVM Set: Not Supported 00:23:58.278 Extended LBA Formats Supported: Not Supported 00:23:58.278 Flexible Data Placement 
Supported: Not Supported 00:23:58.278 00:23:58.278 Controller Memory Buffer Support 00:23:58.278 ================================ 00:23:58.278 Supported: No 00:23:58.278 00:23:58.278 Persistent Memory Region Support 00:23:58.278 ================================ 00:23:58.278 Supported: No 00:23:58.278 00:23:58.278 Admin Command Set Attributes 00:23:58.278 ============================ 00:23:58.278 Security Send/Receive: Not Supported 00:23:58.278 Format NVM: Not Supported 00:23:58.278 Firmware Activate/Download: Not Supported 00:23:58.278 Namespace Management: Not Supported 00:23:58.278 Device Self-Test: Not Supported 00:23:58.278 Directives: Not Supported 00:23:58.278 NVMe-MI: Not Supported 00:23:58.278 Virtualization Management: Not Supported 00:23:58.278 Doorbell Buffer Config: Not Supported 00:23:58.278 Get LBA Status Capability: Not Supported 00:23:58.278 Command & Feature Lockdown Capability: Not Supported 00:23:58.278 Abort Command Limit: 1 00:23:58.278 Async Event Request Limit: 1 00:23:58.278 Number of Firmware Slots: N/A 00:23:58.278 Firmware Slot 1 Read-Only: N/A 00:23:58.278 Firmware Activation Without Reset: N/A 00:23:58.278 Multiple Update Detection Support: N/A 00:23:58.278 Firmware Update Granularity: No Information Provided 00:23:58.278 Per-Namespace SMART Log: No 00:23:58.278 Asymmetric Namespace Access Log Page: Not Supported 00:23:58.278 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:58.278 Command Effects Log Page: Not Supported 00:23:58.278 Get Log Page Extended Data: Supported 00:23:58.278 Telemetry Log Pages: Not Supported 00:23:58.278 Persistent Event Log Pages: Not Supported 00:23:58.278 Supported Log Pages Log Page: May Support 00:23:58.278 Commands Supported & Effects Log Page: Not Supported 00:23:58.278 Feature Identifiers & Effects Log Page:May Support 00:23:58.278 NVMe-MI Commands & Effects Log Page: May Support 00:23:58.278 Data Area 4 for Telemetry Log: Not Supported 00:23:58.278 Error Log Page Entries Supported: 1 00:23:58.278 Keep Alive: Not Supported 00:23:58.278 00:23:58.278 NVM Command Set Attributes 00:23:58.278 ========================== 00:23:58.279 Submission Queue Entry Size 00:23:58.279 Max: 1 00:23:58.279 Min: 1 00:23:58.279 Completion Queue Entry Size 00:23:58.279 Max: 1 00:23:58.279 Min: 1 00:23:58.279 Number of Namespaces: 0 00:23:58.279 Compare Command: Not Supported 00:23:58.279 Write Uncorrectable Command: Not Supported 00:23:58.279 Dataset Management Command: Not Supported 00:23:58.279 Write Zeroes Command: Not Supported 00:23:58.279 Set Features Save Field: Not Supported 00:23:58.279 Reservations: Not Supported 00:23:58.279 Timestamp: Not Supported 00:23:58.279 Copy: Not Supported 00:23:58.279 Volatile Write Cache: Not Present 00:23:58.279 Atomic Write Unit (Normal): 1 00:23:58.279 Atomic Write Unit (PFail): 1 00:23:58.279 Atomic Compare & Write Unit: 1 00:23:58.279 Fused Compare & Write: Not Supported 00:23:58.279 Scatter-Gather List 00:23:58.279 SGL Command Set: Supported 00:23:58.279 SGL Keyed: Not Supported 00:23:58.279 SGL Bit Bucket Descriptor: Not Supported 00:23:58.279 SGL Metadata Pointer: Not Supported 00:23:58.279 Oversized SGL: Not Supported 00:23:58.279 SGL Metadata Address: Not Supported 00:23:58.279 SGL Offset: Supported 00:23:58.279 Transport SGL Data Block: Not Supported 00:23:58.279 Replay Protected Memory Block: Not Supported 00:23:58.279 00:23:58.279 Firmware Slot Information 00:23:58.279 ========================= 00:23:58.279 Active slot: 0 00:23:58.279 00:23:58.279 00:23:58.279 Error Log 00:23:58.279 
========= 00:23:58.279 00:23:58.279 Active Namespaces 00:23:58.279 ================= 00:23:58.279 Discovery Log Page 00:23:58.279 ================== 00:23:58.279 Generation Counter: 2 00:23:58.279 Number of Records: 2 00:23:58.279 Record Format: 0 00:23:58.279 00:23:58.279 Discovery Log Entry 0 00:23:58.279 ---------------------- 00:23:58.279 Transport Type: 3 (TCP) 00:23:58.279 Address Family: 1 (IPv4) 00:23:58.279 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:58.279 Entry Flags: 00:23:58.279 Duplicate Returned Information: 0 00:23:58.279 Explicit Persistent Connection Support for Discovery: 0 00:23:58.279 Transport Requirements: 00:23:58.279 Secure Channel: Not Specified 00:23:58.279 Port ID: 1 (0x0001) 00:23:58.279 Controller ID: 65535 (0xffff) 00:23:58.279 Admin Max SQ Size: 32 00:23:58.279 Transport Service Identifier: 4420 00:23:58.279 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:58.279 Transport Address: 10.0.0.1 00:23:58.279 Discovery Log Entry 1 00:23:58.279 ---------------------- 00:23:58.279 Transport Type: 3 (TCP) 00:23:58.279 Address Family: 1 (IPv4) 00:23:58.279 Subsystem Type: 2 (NVM Subsystem) 00:23:58.279 Entry Flags: 00:23:58.279 Duplicate Returned Information: 0 00:23:58.279 Explicit Persistent Connection Support for Discovery: 0 00:23:58.279 Transport Requirements: 00:23:58.279 Secure Channel: Not Specified 00:23:58.279 Port ID: 1 (0x0001) 00:23:58.279 Controller ID: 65535 (0xffff) 00:23:58.279 Admin Max SQ Size: 32 00:23:58.279 Transport Service Identifier: 4420 00:23:58.279 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:58.279 Transport Address: 10.0.0.1 00:23:58.279 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:58.279 get_feature(0x01) failed 00:23:58.279 get_feature(0x02) failed 00:23:58.279 get_feature(0x04) failed 00:23:58.279 ===================================================== 00:23:58.279 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:58.279 ===================================================== 00:23:58.279 Controller Capabilities/Features 00:23:58.279 ================================ 00:23:58.279 Vendor ID: 0000 00:23:58.279 Subsystem Vendor ID: 0000 00:23:58.279 Serial Number: 748a41340f2707c1aa22 00:23:58.279 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:58.279 Firmware Version: 6.8.9-20 00:23:58.279 Recommended Arb Burst: 6 00:23:58.279 IEEE OUI Identifier: 00 00 00 00:23:58.279 Multi-path I/O 00:23:58.279 May have multiple subsystem ports: Yes 00:23:58.279 May have multiple controllers: Yes 00:23:58.279 Associated with SR-IOV VF: No 00:23:58.279 Max Data Transfer Size: Unlimited 00:23:58.279 Max Number of Namespaces: 1024 00:23:58.279 Max Number of I/O Queues: 128 00:23:58.279 NVMe Specification Version (VS): 1.3 00:23:58.279 NVMe Specification Version (Identify): 1.3 00:23:58.279 Maximum Queue Entries: 1024 00:23:58.279 Contiguous Queues Required: No 00:23:58.279 Arbitration Mechanisms Supported 00:23:58.279 Weighted Round Robin: Not Supported 00:23:58.279 Vendor Specific: Not Supported 00:23:58.279 Reset Timeout: 7500 ms 00:23:58.279 Doorbell Stride: 4 bytes 00:23:58.279 NVM Subsystem Reset: Not Supported 00:23:58.279 Command Sets Supported 00:23:58.279 NVM Command Set: Supported 00:23:58.279 Boot Partition: Not Supported 00:23:58.279 
Memory Page Size Minimum: 4096 bytes 00:23:58.279 Memory Page Size Maximum: 4096 bytes 00:23:58.279 Persistent Memory Region: Not Supported 00:23:58.279 Optional Asynchronous Events Supported 00:23:58.279 Namespace Attribute Notices: Supported 00:23:58.279 Firmware Activation Notices: Not Supported 00:23:58.279 ANA Change Notices: Supported 00:23:58.279 PLE Aggregate Log Change Notices: Not Supported 00:23:58.279 LBA Status Info Alert Notices: Not Supported 00:23:58.279 EGE Aggregate Log Change Notices: Not Supported 00:23:58.279 Normal NVM Subsystem Shutdown event: Not Supported 00:23:58.279 Zone Descriptor Change Notices: Not Supported 00:23:58.279 Discovery Log Change Notices: Not Supported 00:23:58.279 Controller Attributes 00:23:58.279 128-bit Host Identifier: Supported 00:23:58.279 Non-Operational Permissive Mode: Not Supported 00:23:58.279 NVM Sets: Not Supported 00:23:58.279 Read Recovery Levels: Not Supported 00:23:58.279 Endurance Groups: Not Supported 00:23:58.279 Predictable Latency Mode: Not Supported 00:23:58.279 Traffic Based Keep ALive: Supported 00:23:58.279 Namespace Granularity: Not Supported 00:23:58.279 SQ Associations: Not Supported 00:23:58.279 UUID List: Not Supported 00:23:58.279 Multi-Domain Subsystem: Not Supported 00:23:58.279 Fixed Capacity Management: Not Supported 00:23:58.279 Variable Capacity Management: Not Supported 00:23:58.279 Delete Endurance Group: Not Supported 00:23:58.279 Delete NVM Set: Not Supported 00:23:58.279 Extended LBA Formats Supported: Not Supported 00:23:58.279 Flexible Data Placement Supported: Not Supported 00:23:58.279 00:23:58.279 Controller Memory Buffer Support 00:23:58.279 ================================ 00:23:58.279 Supported: No 00:23:58.279 00:23:58.279 Persistent Memory Region Support 00:23:58.279 ================================ 00:23:58.279 Supported: No 00:23:58.279 00:23:58.279 Admin Command Set Attributes 00:23:58.279 ============================ 00:23:58.279 Security Send/Receive: Not Supported 00:23:58.279 Format NVM: Not Supported 00:23:58.279 Firmware Activate/Download: Not Supported 00:23:58.279 Namespace Management: Not Supported 00:23:58.279 Device Self-Test: Not Supported 00:23:58.279 Directives: Not Supported 00:23:58.279 NVMe-MI: Not Supported 00:23:58.279 Virtualization Management: Not Supported 00:23:58.279 Doorbell Buffer Config: Not Supported 00:23:58.279 Get LBA Status Capability: Not Supported 00:23:58.279 Command & Feature Lockdown Capability: Not Supported 00:23:58.279 Abort Command Limit: 4 00:23:58.279 Async Event Request Limit: 4 00:23:58.279 Number of Firmware Slots: N/A 00:23:58.279 Firmware Slot 1 Read-Only: N/A 00:23:58.279 Firmware Activation Without Reset: N/A 00:23:58.279 Multiple Update Detection Support: N/A 00:23:58.279 Firmware Update Granularity: No Information Provided 00:23:58.279 Per-Namespace SMART Log: Yes 00:23:58.279 Asymmetric Namespace Access Log Page: Supported 00:23:58.279 ANA Transition Time : 10 sec 00:23:58.279 00:23:58.279 Asymmetric Namespace Access Capabilities 00:23:58.279 ANA Optimized State : Supported 00:23:58.279 ANA Non-Optimized State : Supported 00:23:58.279 ANA Inaccessible State : Supported 00:23:58.279 ANA Persistent Loss State : Supported 00:23:58.279 ANA Change State : Supported 00:23:58.279 ANAGRPID is not changed : No 00:23:58.279 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:58.279 00:23:58.279 ANA Group Identifier Maximum : 128 00:23:58.279 Number of ANA Group Identifiers : 128 00:23:58.279 Max Number of Allowed Namespaces : 1024 00:23:58.279 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:58.279 Command Effects Log Page: Supported 00:23:58.279 Get Log Page Extended Data: Supported 00:23:58.279 Telemetry Log Pages: Not Supported 00:23:58.279 Persistent Event Log Pages: Not Supported 00:23:58.279 Supported Log Pages Log Page: May Support 00:23:58.279 Commands Supported & Effects Log Page: Not Supported 00:23:58.279 Feature Identifiers & Effects Log Page:May Support 00:23:58.279 NVMe-MI Commands & Effects Log Page: May Support 00:23:58.279 Data Area 4 for Telemetry Log: Not Supported 00:23:58.279 Error Log Page Entries Supported: 128 00:23:58.279 Keep Alive: Supported 00:23:58.279 Keep Alive Granularity: 1000 ms 00:23:58.279 00:23:58.280 NVM Command Set Attributes 00:23:58.280 ========================== 00:23:58.280 Submission Queue Entry Size 00:23:58.280 Max: 64 00:23:58.280 Min: 64 00:23:58.280 Completion Queue Entry Size 00:23:58.280 Max: 16 00:23:58.280 Min: 16 00:23:58.280 Number of Namespaces: 1024 00:23:58.280 Compare Command: Not Supported 00:23:58.280 Write Uncorrectable Command: Not Supported 00:23:58.280 Dataset Management Command: Supported 00:23:58.280 Write Zeroes Command: Supported 00:23:58.280 Set Features Save Field: Not Supported 00:23:58.280 Reservations: Not Supported 00:23:58.280 Timestamp: Not Supported 00:23:58.280 Copy: Not Supported 00:23:58.280 Volatile Write Cache: Present 00:23:58.280 Atomic Write Unit (Normal): 1 00:23:58.280 Atomic Write Unit (PFail): 1 00:23:58.280 Atomic Compare & Write Unit: 1 00:23:58.280 Fused Compare & Write: Not Supported 00:23:58.280 Scatter-Gather List 00:23:58.280 SGL Command Set: Supported 00:23:58.280 SGL Keyed: Not Supported 00:23:58.280 SGL Bit Bucket Descriptor: Not Supported 00:23:58.280 SGL Metadata Pointer: Not Supported 00:23:58.280 Oversized SGL: Not Supported 00:23:58.280 SGL Metadata Address: Not Supported 00:23:58.280 SGL Offset: Supported 00:23:58.280 Transport SGL Data Block: Not Supported 00:23:58.280 Replay Protected Memory Block: Not Supported 00:23:58.280 00:23:58.280 Firmware Slot Information 00:23:58.280 ========================= 00:23:58.280 Active slot: 0 00:23:58.280 00:23:58.280 Asymmetric Namespace Access 00:23:58.280 =========================== 00:23:58.280 Change Count : 0 00:23:58.280 Number of ANA Group Descriptors : 1 00:23:58.280 ANA Group Descriptor : 0 00:23:58.280 ANA Group ID : 1 00:23:58.280 Number of NSID Values : 1 00:23:58.280 Change Count : 0 00:23:58.280 ANA State : 1 00:23:58.280 Namespace Identifier : 1 00:23:58.280 00:23:58.280 Commands Supported and Effects 00:23:58.280 ============================== 00:23:58.280 Admin Commands 00:23:58.280 -------------- 00:23:58.280 Get Log Page (02h): Supported 00:23:58.280 Identify (06h): Supported 00:23:58.280 Abort (08h): Supported 00:23:58.280 Set Features (09h): Supported 00:23:58.280 Get Features (0Ah): Supported 00:23:58.280 Asynchronous Event Request (0Ch): Supported 00:23:58.280 Keep Alive (18h): Supported 00:23:58.280 I/O Commands 00:23:58.280 ------------ 00:23:58.280 Flush (00h): Supported 00:23:58.280 Write (01h): Supported LBA-Change 00:23:58.280 Read (02h): Supported 00:23:58.280 Write Zeroes (08h): Supported LBA-Change 00:23:58.280 Dataset Management (09h): Supported 00:23:58.280 00:23:58.280 Error Log 00:23:58.280 ========= 00:23:58.280 Entry: 0 00:23:58.280 Error Count: 0x3 00:23:58.280 Submission Queue Id: 0x0 00:23:58.280 Command Id: 0x5 00:23:58.280 Phase Bit: 0 00:23:58.280 Status Code: 0x2 00:23:58.280 Status Code Type: 0x0 00:23:58.280 Do Not Retry: 1 00:23:58.280 
Error Location: 0x28 00:23:58.280 LBA: 0x0 00:23:58.280 Namespace: 0x0 00:23:58.280 Vendor Log Page: 0x0 00:23:58.280 ----------- 00:23:58.280 Entry: 1 00:23:58.280 Error Count: 0x2 00:23:58.280 Submission Queue Id: 0x0 00:23:58.280 Command Id: 0x5 00:23:58.280 Phase Bit: 0 00:23:58.280 Status Code: 0x2 00:23:58.280 Status Code Type: 0x0 00:23:58.280 Do Not Retry: 1 00:23:58.280 Error Location: 0x28 00:23:58.280 LBA: 0x0 00:23:58.280 Namespace: 0x0 00:23:58.280 Vendor Log Page: 0x0 00:23:58.280 ----------- 00:23:58.280 Entry: 2 00:23:58.280 Error Count: 0x1 00:23:58.280 Submission Queue Id: 0x0 00:23:58.280 Command Id: 0x4 00:23:58.280 Phase Bit: 0 00:23:58.280 Status Code: 0x2 00:23:58.280 Status Code Type: 0x0 00:23:58.280 Do Not Retry: 1 00:23:58.280 Error Location: 0x28 00:23:58.280 LBA: 0x0 00:23:58.280 Namespace: 0x0 00:23:58.280 Vendor Log Page: 0x0 00:23:58.280 00:23:58.280 Number of Queues 00:23:58.280 ================ 00:23:58.280 Number of I/O Submission Queues: 128 00:23:58.280 Number of I/O Completion Queues: 128 00:23:58.280 00:23:58.280 ZNS Specific Controller Data 00:23:58.280 ============================ 00:23:58.280 Zone Append Size Limit: 0 00:23:58.280 00:23:58.280 00:23:58.280 Active Namespaces 00:23:58.280 ================= 00:23:58.280 get_feature(0x05) failed 00:23:58.280 Namespace ID:1 00:23:58.280 Command Set Identifier: NVM (00h) 00:23:58.280 Deallocate: Supported 00:23:58.280 Deallocated/Unwritten Error: Not Supported 00:23:58.280 Deallocated Read Value: Unknown 00:23:58.280 Deallocate in Write Zeroes: Not Supported 00:23:58.280 Deallocated Guard Field: 0xFFFF 00:23:58.280 Flush: Supported 00:23:58.280 Reservation: Not Supported 00:23:58.280 Namespace Sharing Capabilities: Multiple Controllers 00:23:58.280 Size (in LBAs): 1953525168 (931GiB) 00:23:58.280 Capacity (in LBAs): 1953525168 (931GiB) 00:23:58.280 Utilization (in LBAs): 1953525168 (931GiB) 00:23:58.280 UUID: 7a6fb7dd-72fc-4820-a45e-7c69ceda41b8 00:23:58.280 Thin Provisioning: Not Supported 00:23:58.280 Per-NS Atomic Units: Yes 00:23:58.280 Atomic Boundary Size (Normal): 0 00:23:58.280 Atomic Boundary Size (PFail): 0 00:23:58.280 Atomic Boundary Offset: 0 00:23:58.280 NGUID/EUI64 Never Reused: No 00:23:58.280 ANA group ID: 1 00:23:58.280 Namespace Write Protected: No 00:23:58.280 Number of LBA Formats: 1 00:23:58.280 Current LBA Format: LBA Format #00 00:23:58.280 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:58.280 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:58.280 rmmod nvme_tcp 00:23:58.280 rmmod nvme_fabrics 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:58.280 15:00:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.280 15:00:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:00.815 15:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:01.751 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:01.751 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:01.751 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:02.689 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:02.689 00:24:02.689 real 0m9.977s 00:24:02.689 user 0m2.261s 00:24:02.689 sys 0m3.699s 00:24:02.689 15:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.689 15:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.689 ************************************ 00:24:02.689 END TEST nvmf_identify_kernel_target 00:24:02.689 ************************************ 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.948 ************************************ 00:24:02.948 START TEST nvmf_auth_host 00:24:02.948 ************************************ 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:02.948 * Looking for test storage... 
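(The lcov version gate traced next, lt 1.15 2 via cmp_versions, reduces to splitting each version string on '.', '-', and ':' and comparing field by field numerically. A standalone sketch under the assumption of purely numeric fields; the in-tree helper in scripts/common.sh is more elaborate:)

cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2 succeeds (1 < 2 in the first field)
    local IFS=.-: i a b
    local -a v1 v2
    read -ra v1 <<< "$1"    # "1.15" -> (1 15)
    read -ra v2 <<< "$3"    # "2"    -> (2)
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}    # missing trailing fields compare as 0
        ((a == b)) && continue
        case "$2" in
            '<') return $((a > b)) ;;
            '>') return $((a < b)) ;;
        esac
    done
    return 1    # equal: neither strictly less-than nor greater-than
}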
00:24:02.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.948 --rc genhtml_branch_coverage=1 00:24:02.948 --rc genhtml_function_coverage=1 00:24:02.948 --rc genhtml_legend=1 00:24:02.948 --rc geninfo_all_blocks=1 00:24:02.948 --rc geninfo_unexecuted_blocks=1 00:24:02.948 00:24:02.948 ' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.948 --rc genhtml_branch_coverage=1 00:24:02.948 --rc genhtml_function_coverage=1 00:24:02.948 --rc genhtml_legend=1 00:24:02.948 --rc geninfo_all_blocks=1 00:24:02.948 --rc geninfo_unexecuted_blocks=1 00:24:02.948 00:24:02.948 ' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.948 --rc genhtml_branch_coverage=1 00:24:02.948 --rc genhtml_function_coverage=1 00:24:02.948 --rc genhtml_legend=1 00:24:02.948 --rc geninfo_all_blocks=1 00:24:02.948 --rc geninfo_unexecuted_blocks=1 00:24:02.948 00:24:02.948 ' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:02.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.948 --rc genhtml_branch_coverage=1 00:24:02.948 --rc genhtml_function_coverage=1 00:24:02.948 --rc genhtml_legend=1 00:24:02.948 --rc geninfo_all_blocks=1 00:24:02.948 --rc geninfo_unexecuted_blocks=1 00:24:02.948 00:24:02.948 ' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.948 15:00:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.948 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.948 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:02.949 15:00:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.483 15:00:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:05.483 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:05.483 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.483 
15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:05.483 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:05.483 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.483 15:00:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.483 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:24:05.483 00:24:05.484 --- 10.0.0.2 ping statistics --- 00:24:05.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.484 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:24:05.484 00:24:05.484 --- 10.0.0.1 ping statistics --- 00:24:05.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.484 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=763866 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 763866 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 763866 ']' 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
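(Condensed from the nvmf_tcp_init trace above, in which every command here appears: the two ports of the E810 pair are split across network namespaces so initiator and target traffic crosses a real TCP path instead of loopback. Interface names are this rig's cvl_0_0/cvl_0_1:)

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability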
00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.484 15:00:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=343f8bae755ea9e981181e9804386263 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Q5l 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 343f8bae755ea9e981181e9804386263 0 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 343f8bae755ea9e981181e9804386263 0 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=343f8bae755ea9e981181e9804386263 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:05.484 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Q5l 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Q5l 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Q5l 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.742 15:00:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=afae9d965a54721ffa754161473e87aae2c271d09ef1f5d90cc05289a0a167a3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tqJ 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key afae9d965a54721ffa754161473e87aae2c271d09ef1f5d90cc05289a0a167a3 3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 afae9d965a54721ffa754161473e87aae2c271d09ef1f5d90cc05289a0a167a3 3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=afae9d965a54721ffa754161473e87aae2c271d09ef1f5d90cc05289a0a167a3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tqJ 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tqJ 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.tqJ 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa8ab8be0978b079f86db3f967659176cf461a209f5fdf88 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yP3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa8ab8be0978b079f86db3f967659176cf461a209f5fdf88 0 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa8ab8be0978b079f86db3f967659176cf461a209f5fdf88 0 
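
Each gen_dhchap_key call above reads len/2 random bytes with xxd -p and uses the resulting hex string itself as the secret; format_dhchap_key/format_key then wrap it through the inline "python -" whose body is not captured in the trace. The keys printed further down (e.g. DHHC-1:00:ZmE4YWI4..., whose base64 payload decodes back to the fa8ab8be... hex string plus a 4-byte tail) are consistent with the standard NVMe DH-HMAC-CHAP secret encoding, base64(secret || crc32(secret)) behind a one-byte hash identifier, so the following reconstruction is a sketch under that assumption, not the verbatim helper:

# Sketch of gen_dhchap_key/format_key: the hex string is the secret; DHHC-1
# wraps base64 of the secret plus its CRC32 (little endian) behind a digest id.
key=$(xxd -p -c0 -l 24 /dev/urandom)  # 48 hex chars, as for "gen_dhchap_key null 48"
digest=0                              # 0=null, 1=sha256, 2=sha384, 3=sha512 (common.sh@752)
python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(secret) & 0xffffffff)
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
EOF
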
00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa8ab8be0978b079f86db3f967659176cf461a209f5fdf88 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yP3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yP3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yP3 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4eeaf74c8861a1f185fbbb6a7d50fa5e49b29549099bd087 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BSm 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4eeaf74c8861a1f185fbbb6a7d50fa5e49b29549099bd087 2 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4eeaf74c8861a1f185fbbb6a7d50fa5e49b29549099bd087 2 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4eeaf74c8861a1f185fbbb6a7d50fa5e49b29549099bd087 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BSm 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BSm 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.BSm 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:05.742 15:00:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b6c62bee2b80f49270e790be206f3a88 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.gXs 00:24:05.742 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b6c62bee2b80f49270e790be206f3a88 1 00:24:05.743 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b6c62bee2b80f49270e790be206f3a88 1 00:24:05.743 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:05.743 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:05.743 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b6c62bee2b80f49270e790be206f3a88 00:24:05.743 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:05.743 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.gXs 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.gXs 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.gXs 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=01ad2cb389a7477009eb8452793fbd61 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GlH 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 01ad2cb389a7477009eb8452793fbd61 1 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 01ad2cb389a7477009eb8452793fbd61 1 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=01ad2cb389a7477009eb8452793fbd61 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GlH 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GlH 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.GlH 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a80b44afdb22d7fe420151b1ca3ed2b3b477f4dfc5157343 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kif 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a80b44afdb22d7fe420151b1ca3ed2b3b477f4dfc5157343 2 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a80b44afdb22d7fe420151b1ca3ed2b3b477f4dfc5157343 2 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a80b44afdb22d7fe420151b1ca3ed2b3b477f4dfc5157343 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kif 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kif 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kif 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:06.001 15:00:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=efd0002ec1f7473a881f530b764bdb06 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.WmL 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key efd0002ec1f7473a881f530b764bdb06 0 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 efd0002ec1f7473a881f530b764bdb06 0 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=efd0002ec1f7473a881f530b764bdb06 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.WmL 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.WmL 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.WmL 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f5c0c27c7a3e7b6257598dd654454104c42e14d5b052b3813aa2c00187c1588 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Z1S 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f5c0c27c7a3e7b6257598dd654454104c42e14d5b052b3813aa2c00187c1588 3 00:24:06.001 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f5c0c27c7a3e7b6257598dd654454104c42e14d5b052b3813aa2c00187c1588 3 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f5c0c27c7a3e7b6257598dd654454104c42e14d5b052b3813aa2c00187c1588 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Z1S 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Z1S 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Z1S 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 763866 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 763866 ']' 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.002 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.260 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:06.260 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:06.260 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Q5l 00:24:06.260 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.260 15:00:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.tqJ ]] 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tqJ 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yP3 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.BSm ]] 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.BSm 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.gXs 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.260 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.GlH ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GlH 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kif 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.WmL ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.WmL 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Z1S 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.518 15:00:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:06.518 15:00:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:07.452 Waiting for block devices as requested 00:24:07.452 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:07.452 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:07.710 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:07.710 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:07.710 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:07.967 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:07.967 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:07.967 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:07.967 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:08.225 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:08.225 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:08.225 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:08.225 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:08.483 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:08.483 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:08.483 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:08.483 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:09.049 No valid GPT data, bailing 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:09.049 15:00:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:09.049 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:09.049 00:24:09.049 Discovery Log Number of Records 2, Generation counter 2 00:24:09.049 =====Discovery Log Entry 0====== 00:24:09.049 trtype: tcp 00:24:09.049 adrfam: ipv4 00:24:09.049 subtype: current discovery subsystem 00:24:09.049 treq: not specified, sq flow control disable supported 00:24:09.049 portid: 1 00:24:09.049 trsvcid: 4420 00:24:09.049 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:09.050 traddr: 10.0.0.1 00:24:09.050 eflags: none 00:24:09.050 sectype: none 00:24:09.050 =====Discovery Log Entry 1====== 00:24:09.050 trtype: tcp 00:24:09.050 adrfam: ipv4 00:24:09.050 subtype: nvme subsystem 00:24:09.050 treq: not specified, sq flow control disable supported 00:24:09.050 portid: 1 00:24:09.050 trsvcid: 4420 00:24:09.050 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:09.050 traddr: 10.0.0.1 00:24:09.050 eflags: none 00:24:09.050 sectype: none 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.050 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.308 nvme0n1 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.308 15:00:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.308 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.308 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.308 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.308 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.308 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.308 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.309 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.566 nvme0n1 00:24:09.566 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.566 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.567 15:00:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.567 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.824 nvme0n1 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.824 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]]
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:09.825 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.082 nvme0n1
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]]
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.082 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.083 nvme0n1
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.083 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
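The round traced above is easier to follow in source form. The following is a minimal sketch of the connect_authenticate step reconstructed from the host/auth.sh xtrace lines; the traced commands and NQNs are taken verbatim from this log, while the function scaffolding and error handling are assumptions (rpc_cmd is the autotest wrapper around scripts/rpc.py):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=()
        # Add --dhchap-ctrlr-key only when a controller key exists for this keyid
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to exactly one digest/dhgroup pair for this round
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect to the target with the host key (and the optional controller key)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # Authentication passed only if the controller actually came up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Every round in this excerpt follows that exact shape; only the digest, dhgroup and keyid change between iterations.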
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.341 15:00:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.341 nvme0n1
00:24:10.341 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.341 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.341 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.341 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.341 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.341 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]]
00:24:10.599 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.600 nvme0n1
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.600 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.858 nvme0n1
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:10.858 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]]
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:11.116 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.117 nvme0n1
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:11.117 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.375 15:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.375 nvme0n1
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.375 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
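For reference, the target half of each round (nvmet_auth_set_key, the block of bare echo lines at @48-@51 above) pushes the same key material into the kernel nvmet host entry. bash xtrace does not print redirections, so the configfs destinations in this sketch are an assumption based on the kernel nvmet host attributes, not something visible in this log; the echoed values themselves match the trace:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3 key ckey
        # Assumed hostnqn path; only the echoed values appear in the xtrace output
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        key=${keys[keyid]} ckey=${ckeys[keyid]}
        echo "hmac($digest)" > "$host/dhchap_hash"     # e.g. 'hmac(sha256)'
        echo "$dhgroup" > "$host/dhchap_dhgroup"       # e.g. ffdhe3072
        echo "$key" > "$host/dhchap_key"               # DHHC-1:0X:...:
        # keyid 4 has no controller key, hence the '[[ -z '' ]]' traced above
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrlr_key"
    }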
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.632 nvme0n1
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:11.632 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.148 nvme0n1
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:12.148 15:00:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
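Stepping back, the @101-@104 lines above show the loop that drives all of these rounds. A sketch of its inferred shape follows; the array contents listed are only the groups this excerpt actually exercises, and the full test presumably iterates the other digests as well:

    # sha256 is the only digest seen in this excerpt; keys/ckeys hold keyids 0-4
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)
    for dhgroup in "${dhgroups[@]}"; do            # auth.sh@101 in the trace
        for keyid in "${!keys[@]}"; do             # auth.sh@102
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # auth.sh@103
            connect_authenticate sha256 "$dhgroup" "$keyid"   # auth.sh@104
        done
    done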
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.406 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.663 nvme0n1
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
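One bash detail worth calling out, since it recurs at @58 in every round: ${ckeys[keyid]:+...} expands to the extra option pair only when that keyid has a non-empty controller key, which is how keyid 4 ends up connecting with --dhchap-key alone. A standalone illustration with hypothetical, truncated values:

    ckeys=([3]="DHHC-1:00:ZWZk...:" [4]="")   # keyid 4 intentionally empty
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=3 -> 2 extra arg(s): --dhchap-ctrlr-key ckey3
    # keyid=4 -> 0 extra arg(s):

Because the expansion lives in an array, the empty case contributes zero words to the attach command instead of an empty string argument.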
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.663 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.921 nvme0n1
00:24:12.921 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.921 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:12.921 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.921 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:12.921 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:12.921 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.179 15:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.473 nvme0n1
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:13.473 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]]
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP
]] 00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.474 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.108 nvme0n1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.108 15:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 nvme0n1 00:24:14.366 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.366 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.366 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.366 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.366 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.366 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.624 15:00:57 
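
The get_main_ns_ip trace that precedes every attach resolves the dial address from the transport type via bash indirect expansion: the associative array holds variable names, not addresses. An approximate reconstruction from the nvmf/common.sh@769-783 markers, illustrative rather than verbatim:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # values are variable *names*
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip}: value of the named variable
        echo "${!ip}"                 # 10.0.0.1 for tcp in this run
    }
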
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:14.624 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.625 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.191 nvme0n1 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:15.191 
15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.191 15:00:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.449 nvme0n1 00:24:15.449 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.449 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.449 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.449 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.449 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.449 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.707 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.273 nvme0n1 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:16.273 15:00:58 
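
On the target side, each nvmet_auth_set_key trace (host/auth.sh@42-51) installs the digest, DH group, and DHHC-1 key before the attach is attempted. The echoes plausibly land in the kernel nvmet configfs host entry; the paths below are an assumption for illustration and are not shown anywhere in this log:

    # Hypothetical target-side writes for the ffdhe8192 / keyid 0 cell.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for this cell
    echo ffdhe8192 > "$host/dhchap_dhgroup"        # DH group for this cell
    echo "$key" > "$host/dhchap_key"               # host key (DHHC-1:00:...)
    # bidirectional cells also install a controller key:
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
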
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:16.273 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.274 15:00:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.207 nvme0n1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.207 15:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.141 nvme0n1 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.141 15:01:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.141 15:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.085 nvme0n1 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.085 15:01:01 
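
After a successful handshake, each cell proves the controller actually came up and then resets state for the next one (host/auth.sh@64-65). A sketch, with the names taken from the trace and the rpc.py client assumed:

    # Verify the authenticated controller exists, then tear it down.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                     # would fail the test otherwise
    rpc.py bdev_nvme_detach_controller nvme0   # clean slate for the next cell
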
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.085 15:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.651 nvme0n1 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.910 15:01:02 
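
The recurring host/auth.sh@58 line builds the controller-key arguments with bash's ${var:+word} expansion, so a unidirectional key (keyid 4, whose ckey is empty) contributes no arguments at all. A minimal demo with hypothetical key values:

    # ${ckeys[keyid]:+...} expands only when ckeys[keyid] is set and non-empty.
    ckeys[1]="DHHC-1:02:<ctrl key>"   # hypothetical bidirectional entry
    ckeys[4]=""                       # unidirectional: no controller key
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid adds ${#ckey[@]} args"   # 2, then 0
    done
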
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.910 15:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.844 nvme0n1 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:20.844 
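
Here the sweep rolls over from sha256 to sha384 and restarts at ffdhe2048. The three nested loops that the host/auth.sh@100-104 markers trace out, reconstructed from the trace:

    # The full matrix this section of the log walks: digest x dhgroup x keyid.
    for digest in "${digests[@]}"; do         # sha256, sha384, ...
        for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048 .. ffdhe8192
            for keyid in "${!keys[@]}"; do    # key indices 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
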
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
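
nvmet_auth_set_key (host/auth.sh@42-51, just traced) pushes the digest, DH group, and DHHC-1 secrets to the target; xtrace strips redirections, so only the bare echo commands appear in the log. A plausible reconstruction is sketched below, assuming the kernel nvmet configfs layout for the host NQN; the /sys/kernel/config paths are an assumption, since the actual destinations are not visible in the trace:

    # Hedged sketch of nvmet_auth_set_key; attribute paths assumed,
    # echo commands taken from the trace above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host/dhchap_hash"     # host/auth.sh@48
        echo "$dhgroup"      > "$host/dhchap_dhgroup"  # host/auth.sh@49
        echo "$key"          > "$host/dhchap_key"      # host/auth.sh@50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # host/auth.sh@51
    }

The [[ -z ... ]] test at host/auth.sh@51 matches the optional controller key: key 4 has an empty ckey, so for it the second write is skipped, while keys 0 through 3 also program a bidirectional secret.
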
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.844 nvme0n1
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
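
connect_authenticate (host/auth.sh@55-65) is the host half of each cell: pin the initiator to a single digest/DH-group pair, attach with the named key material, check that the controller actually appears, then tear it down. Condensed from the RPCs traced above into a straight-line sketch (rpc_cmd and its error handling come from autotest_common.sh):

    # The four RPCs behind one connect_authenticate pass, as traced.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Verify the controller came up, then detach for the next cell:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

If authentication failed, the attach RPC would error out and the jq name check would never see nvme0, which is what makes each cell of the matrix self-verifying.
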
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]]
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:20.844 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:21.103 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.104 nvme0n1
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.104 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.363 nvme0n1
00:24:21.363 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.363 15:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.363 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.621 nvme0n1
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.621 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.622 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.880 nvme0n1
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
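
All of the secrets cycling through this matrix use the DHHC-1 representation, DHHC-1:tt:<base64>:. As far as I can tell from the key material itself, tt encodes the hash associated with the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32; the lengths in this log line up with that reading (key 2, tagged 01, decodes to 32+4 bytes; key 3, tagged 02, to 48+4; key 4, tagged 03, to 64+4). A quick length check using key 0 from the trace, not something the test itself does:

    # Sanity-check a DHHC-1 secret's payload length (assumed format).
    key='DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:'
    b64=${key#DHHC-1:??:}   # drop the 'DHHC-1:00:' prefix
    b64=${b64%:}            # drop the trailing ':'
    printf '%s' "$b64" | base64 -d | wc -c   # prints 36 = 32-byte secret + CRC-32
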
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:21.880 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.139 nvme0n1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.139 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.397 nvme0n1
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]]
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.397 15:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.397 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.398 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.655 nvme0n1
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==:
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y:
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.655 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.656 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.656 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.656 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.656 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:22.656 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.656 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.914 nvme0n1
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=:
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:22.914 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 nvme0n1
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
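
get_main_ns_ip (nvmf/common.sh@769-783), expanded verbatim before every attach in this log, maps the active transport to the environment variable that names the initiator address and prints its value, 10.0.0.1 throughout this run. The reassembly below follows the traced expansions; the indirect ${!ip} expansion at the end is inferred, since xtrace only shows its result:

    # Reassembly of get_main_ns_ip from the trace; TEST_TRANSPORT=tcp and
    # NVMF_INITIATOR_IP=10.0.0.1 in this run. The variable holding the
    # transport is an assumption; xtrace shows only the expanded value.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip}: NVMF_INITIATOR_IP -> 10.0.0.1
        echo "${!ip}"
    }
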
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw:
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=:
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.173 15:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.432 nvme0n1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==:
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==:
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.432 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.690 nvme0n1
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:23.690 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr:
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]]
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp:
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.948 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.949 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.206 nvme0n1 00:24:24.206 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.207 15:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.465 nvme0n1 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.465 15:01:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:24.465 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.466 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.724 nvme0n1 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.724 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.982 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.239 nvme0n1 00:24:25.239 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.239 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.239 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.240 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.240 15:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.240 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.497 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.498 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.063 nvme0n1 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.063 15:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.063 15:01:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.063 15:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.321 nvme0n1 00:24:26.321 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.321 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.321 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.321 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.321 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.321 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.579 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.579 
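The host/auth.sh trace above repeats one pattern per (digest, dhgroup, keyid) combination: load a DHHC-1 secret into the kernel nvmet target with nvmet_auth_set_key, pin the SPDK host to a single digest/DH-group pair with bdev_nvme_set_options, resolve the initiator address, attach a controller (which only succeeds if the DH-HMAC-CHAP handshake completes), confirm it shows up in bdev_nvme_get_controllers, and detach it for the next pass. In the DHHC-1:NN:...: secrets, the two-digit field identifies the hash the secret is tied to (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A condensed sketch of the loop this trace is executing, using the helper names visible above (the keys/ckeys arrays and the full dhgroups list are defined earlier in host/auth.sh; this is a simplification, not the verbatim script):

    # Assumes SPDK's test helpers (rpc_cmd, nvmet_auth_set_key, get_main_ns_ip)
    # and the keys/ckeys arrays set up earlier in host/auth.sh.
    for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do    # the full run walks every configured group
      for keyid in "${!keys[@]}"; do                    # keyids 0..4 in this run
        # Target side: install the secret for this digest/dhgroup/keyid.
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
        # Host side: allow exactly one digest and one DH group.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Attach succeeds only if the DH-HMAC-CHAP handshake authenticates.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # Prove the controller exists, then tear it down for the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done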
15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.144 nvme0n1 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.144 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.145 15:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.402 nvme0n1 00:24:27.402 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.402 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.402 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.402 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.402 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.660 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.660 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.661 15:01:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.661 15:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.593 nvme0n1 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:28.593 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.594 15:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.526 nvme0n1 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.526 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.527 
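The trace above and below is the body of the main key loop in host/auth.sh. The script lines echoed at host/auth.sh@100 through @104 imply the following nested structure; the array contents are inferred from the values that actually appear in this log (digests sha384 and sha512, DH groups ffdhe2048 through ffdhe8192, key IDs 0 through 4), so treat this as a sketch of the loop rather than the verbatim script:

    for digest in "${digests[@]}"; do        # sha384, sha512, ... (host/auth.sh@100)
        for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 ... ffdhe8192 (host/auth.sh@101)
            for keyid in "${!keys[@]}"; do   # key IDs 0..4 (host/auth.sh@102)
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
            done
        done
    done

The bare nvme0n1 lines scattered through the log are the bdev name that rpc.py prints when bdev_nvme_attach_controller succeeds, i.e. one per completed DH-HMAC-CHAP handshake; each iteration then detaches nvme0 before installing the next key.
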
15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.527 15:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.459 nvme0n1 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.459 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.391 nvme0n1 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.391 15:01:13 
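Every connect_authenticate invocation in this log issues the same RPC sequence. Reconstructed from the host/auth.sh@55 through @65 lines of the trace (the NQNs and port are the literals visible above; the real function body may differ in details such as error handling, and the conditional controller-key argument is staged in a ckey array at @58 in the script but inlined here for brevity):

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the digest/DH group under test (host/auth.sh@60).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach with the host key; pass the controller key only when one is defined (host/auth.sh@58, @61).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # A successful handshake leaves exactly one controller named nvme0 (host/auth.sh@64).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        # Tear down before the next digest/dhgroup/key combination (host/auth.sh@65).
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Note that keyid 4 has no controller key (the trace shows ckey= empty for it), which is why its attach calls carry only --dhchap-key key4.
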
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.391 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.391 15:01:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.392 15:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.324 nvme0n1 00:24:32.324 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.324 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.324 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:32.325 nvme0n1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.325 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 nvme0n1 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:32.583 
15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.583 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.842 nvme0n1 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.842 
15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.842 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.100 nvme0n1 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.100 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.358 nvme0n1 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.358 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.359 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.359 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:33.359 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:33.359 15:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.359 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.617 nvme0n1 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.617 
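The check that keeps appearing as [[ nvme0 == \n\v\m\e\0 ]] is an artifact of xtrace: inside [[ ]], an unquoted right-hand side of == is treated as a glob pattern, so the script quotes it to force a literal comparison, and set -x prints the escaped form. In the script this is presumably written along the following lines (the ctrlr_name variable is illustrative, not from the test):

    # jq -r '.[].name' flattens the bdev_nvme_get_controllers JSON to bare
    # controller names; with one authenticated controller this must be "nvme0".
    ctrlr_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $ctrlr_name == "nvme0" ]]   # quoted RHS compares literally, not as a glob
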
15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.617 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.618 15:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.618 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.876 nvme0n1 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:33.876 15:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.876 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.135 nvme0n1 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.135 15:01:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.135 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.394 nvme0n1 00:24:34.394 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.394 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.394 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.394 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.394 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.394 15:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.394 
15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.394 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
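
The passes above and below all follow the same per-key cycle from auth.sh: install the key on the kernel nvmet target (nvmet_auth_set_key), pin the SPDK host to a single digest/dhgroup pair (bdev_nvme_set_options), resolve the initiator address (get_main_ns_ip picks NVMF_INITIATOR_IP for tcp, 10.0.0.1 in this run), attach a controller with the matching key pair, confirm it came up, and detach. A condensed sketch of the host side of one pass, assuming SPDK's scripts/rpc.py carries the rpc_cmd calls seen in the trace; the RPC names and flags are taken verbatim from it, while the rpc.py path and the key names key0/ckey0 are placeholders for keys registered during earlier setup:

    # host side of one pass (sketch; rpc.py path assumed, flags as seen in the trace)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

A controller only shows up in bdev_nvme_get_controllers if the DH-HMAC-CHAP handshake succeeded, which is what the repeated [[ nvme0 == \n\v\m\e\0 ]] checks in the trace assert for every digest/dhgroup/key combination.
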
00:24:34.652 nvme0n1 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.652 15:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.652 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.910 nvme0n1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.910 15:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.910 15:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.910 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.911 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.168 nvme0n1 00:24:35.168 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.168 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.168 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.168 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.168 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.168 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.426 15:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.684 nvme0n1 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.684 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.685 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 nvme0n1 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.943 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.944 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.201 nvme0n1 00:24:36.201 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.202 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.459 15:01:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.459 15:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.717 nvme0n1 00:24:36.717 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.717 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.717 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.717 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.717 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.717 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:36.975 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:36.976 15:01:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.976 15:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.233 nvme0n1 00:24:37.233 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.234 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.234 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.234 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.234 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.492 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.058 nvme0n1 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.058 15:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.623 nvme0n1 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:38.623 15:01:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:38.623 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.624 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 nvme0n1 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQzZjhiYWU3NTVlYTllOTgxMTgxZTk4MDQzODYyNjNLyKCw: 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YWZhZTlkOTY1YTU0NzIxZmZhNzU0MTYxNDczZTg3YWFlMmMyNzFkMDllZjFmNWQ5MGNjMDUyODlhMGExNjdhM8n4a6s=: 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.189 15:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.122 nvme0n1 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.122 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.123 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.123 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.123 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.123 15:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.055 nvme0n1 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.055 15:01:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.055 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.056 15:01:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.056 15:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.008 nvme0n1 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YTgwYjQ0YWZkYjIyZDdmZTQyMDE1MWIxY2EzZWQyYjNiNDc3ZjRkZmM1MTU3MzQzlr1rUg==: 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: ]] 00:24:42.008 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkMDAwMmVjMWY3NDczYTg4MWY1MzBiNzY0YmRiMDasCq3Y: 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.009 15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.009 
15:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.653 nvme0n1 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.653 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y1YzBjMjdjN2EzZTdiNjI1NzU5OGRkNjU0NDU0MTA0YzQyZTE0ZDViMDUyYjM4MTNhYTJjMDAxODdjMTU4OFjfOGQ=: 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.654 15:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.587 nvme0n1 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.587 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.847 request: 00:24:43.847 { 00:24:43.847 "name": "nvme0", 00:24:43.847 "trtype": "tcp", 00:24:43.847 "traddr": "10.0.0.1", 00:24:43.847 "adrfam": "ipv4", 00:24:43.847 "trsvcid": "4420", 00:24:43.847 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:43.847 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:43.847 "prchk_reftag": false, 00:24:43.847 "prchk_guard": false, 00:24:43.847 "hdgst": false, 00:24:43.847 "ddgst": false, 00:24:43.847 "allow_unrecognized_csi": false, 00:24:43.847 "method": "bdev_nvme_attach_controller", 00:24:43.847 "req_id": 1 00:24:43.847 } 00:24:43.847 Got JSON-RPC error response 00:24:43.847 response: 00:24:43.847 { 00:24:43.847 "code": -5, 00:24:43.847 "message": "Input/output error" 00:24:43.847 } 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
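The exchange above is the first negative check of this pass: nvmet_auth_set_key loaded key 1 into the target's entry for nqn.2024-02.io.spdk:host0, so an attach that offers no DH-HMAC-CHAP key at all is rejected, and the failure surfaces as JSON-RPC error -5 ("Input/output error") instead of a new controller. As a standalone command the attempt looks roughly like the sketch below (assumptions: rpc_cmd resolves to scripts/rpc.py against the running target, as in SPDK's test harness; only flags that appear in the trace are used):

# Expected to fail with -5: the target now demands DH-HMAC-CHAP
# authentication from this host, but no --dhchap-key is offered.
./scripts/rpc.py bdev_nvme_attach_controller \
    -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0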
00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.847 request: 00:24:43.847 { 00:24:43.847 "name": "nvme0", 00:24:43.847 "trtype": "tcp", 00:24:43.847 "traddr": "10.0.0.1", 00:24:43.847 "adrfam": "ipv4", 00:24:43.847 "trsvcid": "4420", 00:24:43.847 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:43.847 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:43.847 "prchk_reftag": false, 00:24:43.847 "prchk_guard": false, 00:24:43.847 "hdgst": false, 00:24:43.847 "ddgst": false, 00:24:43.847 "dhchap_key": "key2", 00:24:43.847 "allow_unrecognized_csi": false, 00:24:43.847 "method": "bdev_nvme_attach_controller", 00:24:43.847 "req_id": 1 00:24:43.847 } 00:24:43.847 Got JSON-RPC error response 00:24:43.847 response: 00:24:43.847 { 00:24:43.847 "code": -5, 00:24:43.847 "message": "Input/output error" 00:24:43.847 } 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
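The second negative check, just above, fails for a different reason: the host does offer a key (--dhchap-key key2), but it is not the key the target holds for this host NQN, so the challenge-response cannot be validated and the attach again returns -5. The target side of that setup is the nvmet_auth_set_key helper, whose echo lines amount to configfs writes of the digest, DH group, and key material. A minimal sketch follows (the configfs paths and attribute names are an assumption inferred from the helper's output, not shown verbatim in the trace, and the DHHC-1 key strings are elided):

# Target side: require DH-HMAC-CHAP from this host and install key 1.
HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$HOST/dhchap_hash"       # digest echoed by the helper
echo 'ffdhe2048' > "$HOST/dhchap_dhgroup"       # DH group echoed by the helper
echo 'DHHC-1:00:...' > "$HOST/dhchap_key"       # host key (keyid 1), elided here
echo 'DHHC-1:02:...' > "$HOST/dhchap_ctrl_key"  # controller key for bidirectional auth, elided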
00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.847 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.106 request: 00:24:44.106 { 00:24:44.106 "name": "nvme0", 00:24:44.106 "trtype": "tcp", 00:24:44.106 "traddr": "10.0.0.1", 00:24:44.106 "adrfam": "ipv4", 00:24:44.106 "trsvcid": "4420", 00:24:44.106 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:44.106 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:44.106 "prchk_reftag": false, 00:24:44.106 "prchk_guard": false, 00:24:44.106 "hdgst": false, 00:24:44.106 "ddgst": false, 00:24:44.106 "dhchap_key": "key1", 00:24:44.106 "dhchap_ctrlr_key": "ckey2", 00:24:44.106 "allow_unrecognized_csi": false, 00:24:44.106 "method": "bdev_nvme_attach_controller", 00:24:44.106 "req_id": 1 00:24:44.106 } 00:24:44.106 Got JSON-RPC error response 00:24:44.106 response: 00:24:44.106 { 00:24:44.106 "code": -5, 00:24:44.106 "message": "Input/output 
error" 00:24:44.106 } 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.106 nvme0n1 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:44.106 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.364 request: 00:24:44.364 { 00:24:44.364 "name": "nvme0", 00:24:44.364 "dhchap_key": "key1", 00:24:44.364 "dhchap_ctrlr_key": "ckey2", 00:24:44.364 "method": "bdev_nvme_set_keys", 00:24:44.364 "req_id": 1 00:24:44.364 } 00:24:44.364 Got JSON-RPC error response 00:24:44.364 response: 00:24:44.364 { 00:24:44.364 "code": -13, 00:24:44.364 "message": "Permission denied" 00:24:44.364 } 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.364 15:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.364 15:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.364 15:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:44.364 15:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:45.298 15:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.670 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmE4YWI4YmUwOTc4YjA3OWY4NmRiM2Y5Njc2NTkxNzZjZjQ2MWEyMDlmNWZkZjg4uRzKQQ==: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NGVlYWY3NGM4ODYxYTFmMTg1ZmJiYjZhN2Q1MGZhNWU0OWIyOTU0OTA5OWJkMDg3bSpTUg==: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.671 nvme0n1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjZjNjJiZWUyYjgwZjQ5MjcwZTc5MGJlMjA2ZjNhODjAEErr: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFhZDJjYjM4OWE3NDc3MDA5ZWI4NDUyNzkzZmJkNjH65HSp: 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.671 request: 00:24:46.671 { 00:24:46.671 "name": "nvme0", 00:24:46.671 "dhchap_key": "key2", 00:24:46.671 "dhchap_ctrlr_key": "ckey1", 00:24:46.671 "method": "bdev_nvme_set_keys", 00:24:46.671 "req_id": 1 00:24:46.671 } 00:24:46.671 Got JSON-RPC error response 00:24:46.671 response: 00:24:46.671 { 00:24:46.671 "code": -13, 00:24:46.671 "message": "Permission denied" 00:24:46.671 } 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:46.671 15:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:48.047 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:48.048 15:01:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:48.048 rmmod nvme_tcp 00:24:48.048 rmmod nvme_fabrics 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 763866 ']' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 763866 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 763866 ']' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 763866 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 763866 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 763866' 00:24:48.048 killing process with pid 763866 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 763866 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 763866 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:24:48.048 15:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:50.586 15:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:51.521 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:51.521 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:51.521 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:52.460 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:52.460 15:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Q5l /tmp/spdk.key-null.yP3 /tmp/spdk.key-sha256.gXs /tmp/spdk.key-sha384.kif /tmp/spdk.key-sha512.Z1S /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:52.460 15:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:53.835 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:53.835 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:53.835 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:24:53.835 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:53.835 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:53.835 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:53.835 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:53.835 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:53.835 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:53.835 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:53.835 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:53.835 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:53.835 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:53.835 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:53.835 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:53.835 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:53.835 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:53.835 00:24:53.835 real 0m50.993s 00:24:53.835 user 0m48.616s 00:24:53.835 sys 0m6.024s 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.835 ************************************ 00:24:53.835 END TEST nvmf_auth_host 00:24:53.835 ************************************ 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.835 ************************************ 00:24:53.835 START TEST nvmf_digest 00:24:53.835 ************************************ 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:53.835 * Looking for test storage... 
00:24:53.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:24:53.835 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.095 --rc genhtml_branch_coverage=1 00:24:54.095 --rc genhtml_function_coverage=1 00:24:54.095 --rc genhtml_legend=1 00:24:54.095 --rc geninfo_all_blocks=1 00:24:54.095 --rc geninfo_unexecuted_blocks=1 00:24:54.095 00:24:54.095 ' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.095 --rc genhtml_branch_coverage=1 00:24:54.095 --rc genhtml_function_coverage=1 00:24:54.095 --rc genhtml_legend=1 00:24:54.095 --rc geninfo_all_blocks=1 00:24:54.095 --rc geninfo_unexecuted_blocks=1 00:24:54.095 00:24:54.095 ' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.095 --rc genhtml_branch_coverage=1 00:24:54.095 --rc genhtml_function_coverage=1 00:24:54.095 --rc genhtml_legend=1 00:24:54.095 --rc geninfo_all_blocks=1 00:24:54.095 --rc geninfo_unexecuted_blocks=1 00:24:54.095 00:24:54.095 ' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:54.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.095 --rc genhtml_branch_coverage=1 00:24:54.095 --rc genhtml_function_coverage=1 00:24:54.095 --rc genhtml_legend=1 00:24:54.095 --rc geninfo_all_blocks=1 00:24:54.095 --rc geninfo_unexecuted_blocks=1 00:24:54.095 00:24:54.095 ' 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.095 
15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.095 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.096 15:01:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.096 15:01:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.632 
15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:56.632 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:56.632 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:56.632 Found net devices under 0000:0a:00.0: cvl_0_0 
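The trace above shows gather_supported_nvmf_pci_devs filtering PCI functions by vendor:device ID (the E810 NICs here match 8086:0x159b) and then resolving each function's kernel net interface through sysfs. A minimal standalone sketch of that lookup, assuming only the standard sysfs layout visible in the trace (the 0x159b filter is simply the device this run happens to match):

    # Sketch: list the netdevs backing each e810 (8086:159b) PCI function,
    # mirroring the nvmf/common.sh loop traced above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue      # skip functions with no bound netdev
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
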
00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:56.632 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.632 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:24:56.632 00:24:56.632 --- 10.0.0.2 ping statistics --- 00:24:56.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.633 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:56.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:24:56.633 00:24:56.633 --- 10.0.0.1 ping statistics --- 00:24:56.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.633 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.633 15:01:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:56.633 ************************************ 00:24:56.633 START TEST nvmf_digest_clean 00:24:56.633 ************************************ 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=773469 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 773469 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 773469 ']' 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.633 [2024-12-11 15:01:39.071868] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:24:56.633 [2024-12-11 15:01:39.071970] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.633 [2024-12-11 15:01:39.145198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.633 [2024-12-11 15:01:39.199374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.633 [2024-12-11 15:01:39.199432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.633 [2024-12-11 15:01:39.199445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.633 [2024-12-11 15:01:39.199455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.633 [2024-12-11 15:01:39.199465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
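nvmfappstart, traced above, launches nvmf_tgt inside the target network namespace with --wait-for-rpc and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that start-and-wait pattern, with the binary path and flags taken verbatim from the log; the polling loop is an illustrative stand-in for waitforlisten (whose body is not shown in this trace), and framework_start_init is only issued once any pre-init RPC configuration is done:

    # Start the target held at --wait-for-rpc, then complete startup by RPC.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done   # stand-in for waitforlisten
    ./scripts/rpc.py framework_start_init                   # releases the app past init
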
00:24:56.633 [2024-12-11 15:01:39.200040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.633 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.892 null0 00:24:56.892 [2024-12-11 15:01:39.427151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.892 [2024-12-11 15:01:39.451371] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=773495 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 773495 /var/tmp/bperf.sock 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 773495 ']' 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:56.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.892 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:56.892 [2024-12-11 15:01:39.498478] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:24:56.892 [2024-12-11 15:01:39.498567] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773495 ] 00:24:56.892 [2024-12-11 15:01:39.563753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.892 [2024-12-11 15:01:39.622170] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.150 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.150 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:57.150 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:57.150 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:57.150 15:01:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:57.408 15:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:57.408 15:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:57.974 nvme0n1 00:24:57.974 15:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:57.974 15:01:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:57.974 Running I/O for 2 seconds... 
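The digest-clean pass just traced reduces to four commands, all visible verbatim in the log: start bdevperf against its own RPC socket, finish its init, attach the TCP controller with data digest enabled, and kick off the timed workload. Condensed here with paths relative to the SPDK tree (the script's waitforlisten between launch and first RPC is omitted):

    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # --ddgst enables NVMe/TCP data digest
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
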
00:25:00.282 18768.00 IOPS, 73.31 MiB/s [2024-12-11T14:01:43.055Z] 19158.50 IOPS, 74.84 MiB/s 00:25:00.282 Latency(us) 00:25:00.282 [2024-12-11T14:01:43.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.282 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:00.282 nvme0n1 : 2.05 18782.03 73.37 0.00 0.00 6674.73 3665.16 46797.56 00:25:00.282 [2024-12-11T14:01:43.055Z] =================================================================================================================== 00:25:00.282 [2024-12-11T14:01:43.055Z] Total : 18782.03 73.37 0.00 0.00 6674.73 3665.16 46797.56 00:25:00.282 { 00:25:00.282 "results": [ 00:25:00.282 { 00:25:00.282 "job": "nvme0n1", 00:25:00.282 "core_mask": "0x2", 00:25:00.282 "workload": "randread", 00:25:00.282 "status": "finished", 00:25:00.282 "queue_depth": 128, 00:25:00.282 "io_size": 4096, 00:25:00.282 "runtime": 2.046903, 00:25:00.282 "iops": 18782.033149592335, 00:25:00.282 "mibps": 73.36731699059506, 00:25:00.282 "io_failed": 0, 00:25:00.282 "io_timeout": 0, 00:25:00.282 "avg_latency_us": 6674.726519058011, 00:25:00.282 "min_latency_us": 3665.1614814814816, 00:25:00.282 "max_latency_us": 46797.55851851852 00:25:00.282 } 00:25:00.282 ], 00:25:00.282 "core_count": 1 00:25:00.282 } 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:00.282 | select(.opcode=="crc32c") 00:25:00.282 | "\(.module_name) \(.executed)"' 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 773495 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 773495 ']' 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 773495 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.282 15:01:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773495 00:25:00.282 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:00.282 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:00.282 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773495' 00:25:00.282 killing process with pid 773495 00:25:00.282 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 773495 00:25:00.282 Received shutdown signal, test time was about 2.000000 seconds 00:25:00.282 00:25:00.282 Latency(us) 00:25:00.282 [2024-12-11T14:01:43.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.282 [2024-12-11T14:01:43.055Z] =================================================================================================================== 00:25:00.282 [2024-12-11T14:01:43.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.282 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 773495 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=774020 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 774020 /var/tmp/bperf.sock 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 774020 ']' 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.540 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:00.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:00.541 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.541 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:00.541 [2024-12-11 15:01:43.292628] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
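As the trace shows, each run_bperf invocation maps its rw/bs/qd arguments directly onto bdevperf's -w/-o/-q flags; the second pass above switches to 128 KiB reads at queue depth 16. A sketch of just that process-spawning core (run_bperf_sketch is a hypothetical name; the real helper in host/digest.sh also handles DSA scanning, waitforlisten, and the accel-stats check, all omitted here):

    run_bperf_sketch() {
        local rw=$1 bs=$2 qd=$3
        ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
            -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
        echo $!        # caller keeps the pid for the later killprocess
    }
    run_bperf_sketch randread 131072 16    # the pass traced above
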
00:25:00.541 [2024-12-11 15:01:43.292708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774020 ] 00:25:00.541 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:00.541 Zero copy mechanism will not be used. 00:25:00.799 [2024-12-11 15:01:43.359993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.799 [2024-12-11 15:01:43.418437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.799 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.799 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:00.799 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:00.799 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:00.799 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:01.365 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.365 15:01:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:01.622 nvme0n1 00:25:01.622 15:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:01.622 15:01:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:01.622 Zero copy mechanism will not be used. 00:25:01.622 Running I/O for 2 seconds... 
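The randread pass starting here follows the bperf pattern used throughout this suite: launch bdevperf idle with --wait-for-rpc, finish initialization over its private RPC socket, attach the target controller with data digest enabled, then drive the timed run from bdevperf.py. Condensed to plain commands that is roughly the following (a minimal sketch: paths, socket, address, and NQN are copied from the xtrace above; the socket poll stands in for waitforlisten):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # bdevperf idle (-z) on core 1, RPC-driven, init held back by --wait-for-rpc
  "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  until [ -S "$SOCK" ]; do sleep 0.1; done       # stand-in for waitforlisten

  # complete framework init, then attach the TCP controller with data digest on
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the timed workload against the resulting nvme0n1 bdev
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests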
00:25:03.927 5818.00 IOPS, 727.25 MiB/s [2024-12-11T14:01:46.700Z] 5932.00 IOPS, 741.50 MiB/s 00:25:03.927 Latency(us) 00:25:03.927 [2024-12-11T14:01:46.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.927 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:03.927 nvme0n1 : 2.00 5933.77 741.72 0.00 0.00 2692.13 713.01 12281.93 00:25:03.927 [2024-12-11T14:01:46.700Z] =================================================================================================================== 00:25:03.927 [2024-12-11T14:01:46.700Z] Total : 5933.77 741.72 0.00 0.00 2692.13 713.01 12281.93 00:25:03.927 { 00:25:03.927 "results": [ 00:25:03.927 { 00:25:03.927 "job": "nvme0n1", 00:25:03.927 "core_mask": "0x2", 00:25:03.927 "workload": "randread", 00:25:03.927 "status": "finished", 00:25:03.927 "queue_depth": 16, 00:25:03.927 "io_size": 131072, 00:25:03.927 "runtime": 2.002099, 00:25:03.927 "iops": 5933.77250575521, 00:25:03.927 "mibps": 741.7215632194012, 00:25:03.927 "io_failed": 0, 00:25:03.927 "io_timeout": 0, 00:25:03.927 "avg_latency_us": 2692.129201147275, 00:25:03.927 "min_latency_us": 713.0074074074074, 00:25:03.927 "max_latency_us": 12281.931851851852 00:25:03.927 } 00:25:03.927 ], 00:25:03.927 "core_count": 1 00:25:03.927 } 00:25:03.927 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:03.927 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:03.927 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:03.927 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:03.927 | select(.opcode=="crc32c") 00:25:03.927 | "\(.module_name) \(.executed)"' 00:25:03.927 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 774020 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 774020 ']' 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 774020 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774020 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774020' 00:25:04.186 killing process with pid 774020 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 774020 00:25:04.186 Received shutdown signal, test time was about 2.000000 seconds 00:25:04.186 00:25:04.186 Latency(us) 00:25:04.186 [2024-12-11T14:01:46.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.186 [2024-12-11T14:01:46.959Z] =================================================================================================================== 00:25:04.186 [2024-12-11T14:01:46.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.186 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 774020 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=774432 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 774432 /var/tmp/bperf.sock 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 774432 ']' 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:04.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.445 15:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 [2024-12-11 15:01:47.019516] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:25:04.445 [2024-12-11 15:01:47.019629] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774432 ] 00:25:04.445 [2024-12-11 15:01:47.088156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.445 [2024-12-11 15:01:47.146731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.703 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.703 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:04.703 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:04.703 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:04.703 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:04.961 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.961 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.219 nvme0n1 00:25:05.219 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:05.219 15:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.477 Running I/O for 2 seconds... 
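Every pass above closes with the same digest verification (the digest.sh@93-@96 lines): pull accel statistics over the bperf socket, extract the crc32c row, and require that a non-zero number of digest operations ran in the expected module, which is software here since scan_dsa=false. Roughly (a sketch; the jq filter is copied verbatim from the trace):

  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  exp_module=software                        # DSA scanning disabled in these passes
  (( acc_executed > 0 ))                     # digest work must actually have run
  [[ $acc_module == "$exp_module" ]]         # and in the expected accel module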
00:25:07.344 18123.00 IOPS, 70.79 MiB/s [2024-12-11T14:01:50.375Z] 18233.50 IOPS, 71.22 MiB/s 00:25:07.602 Latency(us) 00:25:07.602 [2024-12-11T14:01:50.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.602 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:07.602 nvme0n1 : 2.01 18232.98 71.22 0.00 0.00 7003.13 5388.52 16990.81 00:25:07.602 [2024-12-11T14:01:50.375Z] =================================================================================================================== 00:25:07.602 [2024-12-11T14:01:50.375Z] Total : 18232.98 71.22 0.00 0.00 7003.13 5388.52 16990.81 00:25:07.602 { 00:25:07.602 "results": [ 00:25:07.602 { 00:25:07.602 "job": "nvme0n1", 00:25:07.602 "core_mask": "0x2", 00:25:07.602 "workload": "randwrite", 00:25:07.602 "status": "finished", 00:25:07.602 "queue_depth": 128, 00:25:07.602 "io_size": 4096, 00:25:07.602 "runtime": 2.008832, 00:25:07.602 "iops": 18232.983146425384, 00:25:07.602 "mibps": 71.22259041572416, 00:25:07.602 "io_failed": 0, 00:25:07.602 "io_timeout": 0, 00:25:07.602 "avg_latency_us": 7003.1336898806685, 00:25:07.602 "min_latency_us": 5388.515555555556, 00:25:07.602 "max_latency_us": 16990.814814814814 00:25:07.602 } 00:25:07.602 ], 00:25:07.602 "core_count": 1 00:25:07.602 } 00:25:07.602 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:07.602 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:07.602 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:07.602 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:07.602 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:07.602 | select(.opcode=="crc32c") 00:25:07.602 | "\(.module_name) \(.executed)"' 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 774432 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 774432 ']' 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 774432 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774432 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774432' 00:25:07.860 killing process with pid 774432 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 774432 00:25:07.860 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.860 00:25:07.860 Latency(us) 00:25:07.860 [2024-12-11T14:01:50.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.860 [2024-12-11T14:01:50.633Z] =================================================================================================================== 00:25:07.860 [2024-12-11T14:01:50.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.860 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 774432 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=774841 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 774841 /var/tmp/bperf.sock 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 774841 ']' 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:08.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.118 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:08.119 [2024-12-11 15:01:50.734423] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
00:25:08.119 [2024-12-11 15:01:50.734505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774841 ] 00:25:08.119 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:08.119 Zero copy mechanism will not be used. 00:25:08.119 [2024-12-11 15:01:50.804471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.119 [2024-12-11 15:01:50.865760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.377 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.377 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:08.377 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:08.377 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:08.377 15:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:08.635 15:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:08.635 15:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.201 nvme0n1 00:25:09.202 15:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:09.202 15:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.202 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.202 Zero copy mechanism will not be used. 00:25:09.202 Running I/O for 2 seconds... 
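Each bperf instance is then torn down through killprocess, whose guard sequence repeats in the xtrace: confirm the PID is set and alive, check the process name (reactor_1 for bdevperf) so a sudo wrapper is never signalled, then kill and reap. Reconstructed approximately from the traced checks (a sketch, not the verbatim autotest_common.sh helper):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 1            # still running?
      if [ "$(uname)" = Linux ]; then
          local name
          name=$(ps --no-headers -o comm= "$pid")       # reactor_1 here
          [ "$name" != sudo ] || return 1               # never signal a sudo parent
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                   # absorb the exit status
  }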
00:25:11.507 5569.00 IOPS, 696.12 MiB/s [2024-12-11T14:01:54.280Z] 6414.50 IOPS, 801.81 MiB/s 00:25:11.507 Latency(us) 00:25:11.507 [2024-12-11T14:01:54.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.507 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:11.507 nvme0n1 : 2.00 6412.84 801.60 0.00 0.00 2488.27 1796.17 9757.58 00:25:11.507 [2024-12-11T14:01:54.280Z] =================================================================================================================== 00:25:11.507 [2024-12-11T14:01:54.280Z] Total : 6412.84 801.60 0.00 0.00 2488.27 1796.17 9757.58 00:25:11.507 { 00:25:11.507 "results": [ 00:25:11.507 { 00:25:11.507 "job": "nvme0n1", 00:25:11.507 "core_mask": "0x2", 00:25:11.507 "workload": "randwrite", 00:25:11.507 "status": "finished", 00:25:11.507 "queue_depth": 16, 00:25:11.507 "io_size": 131072, 00:25:11.507 "runtime": 2.003014, 00:25:11.507 "iops": 6412.835856364459, 00:25:11.507 "mibps": 801.6044820455573, 00:25:11.507 "io_failed": 0, 00:25:11.507 "io_timeout": 0, 00:25:11.507 "avg_latency_us": 2488.267650707149, 00:25:11.507 "min_latency_us": 1796.171851851852, 00:25:11.507 "max_latency_us": 9757.582222222221 00:25:11.507 } 00:25:11.507 ], 00:25:11.507 "core_count": 1 00:25:11.507 } 00:25:11.507 15:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:11.507 15:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:11.507 15:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:11.507 15:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:11.507 15:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:11.507 | select(.opcode=="crc32c") 00:25:11.507 | "\(.module_name) \(.executed)"' 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 774841 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 774841 ']' 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 774841 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774841 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774841' 00:25:11.507 killing process with pid 774841 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 774841 00:25:11.507 Received shutdown signal, test time was about 2.000000 seconds 00:25:11.507 00:25:11.507 Latency(us) 00:25:11.507 [2024-12-11T14:01:54.280Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.507 [2024-12-11T14:01:54.280Z] =================================================================================================================== 00:25:11.507 [2024-12-11T14:01:54.280Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.507 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 774841 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 773469 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 773469 ']' 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 773469 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773469 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773469' 00:25:11.765 killing process with pid 773469 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 773469 00:25:11.765 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 773469 00:25:12.023 00:25:12.023 real 0m15.727s 00:25:12.023 user 0m31.583s 00:25:12.023 sys 0m4.305s 00:25:12.023 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.023 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:12.023 ************************************ 00:25:12.023 END TEST nvmf_digest_clean 00:25:12.023 ************************************ 00:25:12.023 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:12.023 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:12.023 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.023 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:12.282 ************************************ 00:25:12.282 START TEST nvmf_digest_error 00:25:12.282 ************************************ 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=775391 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 775391 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 775391 ']' 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.282 15:01:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.282 [2024-12-11 15:01:54.849720] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:25:12.282 [2024-12-11 15:01:54.849800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:12.282 [2024-12-11 15:01:54.924225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.282 [2024-12-11 15:01:54.979610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:12.282 [2024-12-11 15:01:54.979681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:12.282 [2024-12-11 15:01:54.979694] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:12.282 [2024-12-11 15:01:54.979719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:12.282 [2024-12-11 15:01:54.979729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
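run_digest_error starts a fresh nvmf target with --wait-for-rpc, as traced above: initialization is deliberately held so the crc32c opcode can be reassigned to the error-injection accel module before any subsystem comes up, after which init is resumed and the usual null0/TCP target config is applied. The launch reduces to roughly (a sketch; binary path, netns, and flags are from the trace, and the RPCs go to the target's default /var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # target app in the test netns, tracepoint mask 0xFFFF, init held at --wait-for-rpc
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &

  # route crc32c to the error module while init is still paused, then resume it
  "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error
  "$SPDK/scripts/rpc.py" framework_start_init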
00:25:12.282 [2024-12-11 15:01:54.980305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.540 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.540 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:12.540 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.541 [2024-12-11 15:01:55.093044] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.541 null0 00:25:12.541 [2024-12-11 15:01:55.212576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.541 [2024-12-11 15:01:55.236860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=775422 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 775422 /var/tmp/bperf.sock 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 775422 ']' 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.541 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:12.541 [2024-12-11 15:01:55.286025] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:25:12.541 [2024-12-11 15:01:55.286103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775422 ] 00:25:12.799 [2024-12-11 15:01:55.354295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.799 [2024-12-11 15:01:55.413313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.799 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.799 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:12.799 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:12.799 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:13.057 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:13.057 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.057 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.057 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.057 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.057 15:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.622 nvme0n1 00:25:13.622 15:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:13.622 15:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.622 15:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.622 
15:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.622 15:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:13.622 15:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.622 Running I/O for 2 seconds... 00:25:13.622 [2024-12-11 15:01:56.384257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.622 [2024-12-11 15:01:56.384312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.622 [2024-12-11 15:01:56.384334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.399978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.400008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.400040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.415595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.415627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.415646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.428056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.428085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.428117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.441937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.441966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.441998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.455575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.455620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.455638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.468439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.468468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.468500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.481757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.481789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.481807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.495481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.495536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.495562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.509699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.509735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.509754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.520986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.521016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.521048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.535979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.536011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.536029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.551523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.551558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.551593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.564969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.565000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.565017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.881 [2024-12-11 15:01:56.577164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.881 [2024-12-11 15:01:56.577207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.881 [2024-12-11 15:01:56.577225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.882 [2024-12-11 15:01:56.592965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.882 [2024-12-11 15:01:56.592997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.882 [2024-12-11 15:01:56.593015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.882 [2024-12-11 15:01:56.607910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.882 [2024-12-11 15:01:56.607941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.882 [2024-12-11 15:01:56.607960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.882 [2024-12-11 15:01:56.619179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.882 [2024-12-11 15:01:56.619208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.882 [2024-12-11 15:01:56.619239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.882 [2024-12-11 15:01:56.633002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.882 [2024-12-11 15:01:56.633033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.882 [2024-12-11 15:01:56.633051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:13.882 [2024-12-11 15:01:56.646692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:13.882 [2024-12-11 15:01:56.646737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:13.882 [2024-12-11 15:01:56.646755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.140 [2024-12-11 15:01:56.658093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.658123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.658155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.674625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.674657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.674689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.688707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.688740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.688758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.700845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.700890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.700907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.716698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.716731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.716750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.729485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.729517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.729551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.744424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.744471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20421 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.744489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.759987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.760019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.760037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.772572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.772622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.772642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.784196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.784225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.784256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.796851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.796883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.796900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.809926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.809954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.809987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.825793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.825825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.825843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.840442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.840473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:15485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.840490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.857448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.857495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.857513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.869278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.869309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.869327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.882464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.882509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.882526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.895474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.895504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.895536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.141 [2024-12-11 15:01:56.908102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.141 [2024-12-11 15:01:56.908133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.141 [2024-12-11 15:01:56.908150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.399 [2024-12-11 15:01:56.923034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:56.923079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:56.923097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:56.940487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:56.940517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:56.940555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:56.950831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:56.950874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:56.950890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:56.966633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:56.966679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:56.966704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:56.981746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:56.981777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:56.981795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:56.996856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:56.996888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:56.996905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.013086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.013118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.026956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.027000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.027019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.044248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 
00:25:14.400 [2024-12-11 15:01:57.044277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.044310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.058591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.058621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.058638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.074530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.074570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.074588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.085090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.085119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.085151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.099623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.099660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.099692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.113665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.113712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.113729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.131519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.131554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.131589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.142183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.142228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.142244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.400 [2024-12-11 15:01:57.156699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.400 [2024-12-11 15:01:57.156729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:7091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.400 [2024-12-11 15:01:57.156762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.172583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.172615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.172634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.186317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.186362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.186379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.199224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.199254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.199272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.213900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.213929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.213962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.228106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.228137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.228154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.244105] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.244134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.244165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.258667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.258699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.258717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.270653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.270682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.270715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.283076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.283105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.283137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.297262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.297293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.297326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.312366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.312397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.312415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.323990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.324017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.324049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:14.659 [2024-12-11 15:01:57.337380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.337410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.337451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.351982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.352012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.352045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 18103.00 IOPS, 70.71 MiB/s [2024-12-11T14:01:57.432Z] [2024-12-11 15:01:57.368023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.368066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.368083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.380024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.380052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.380083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.394400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.394461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.408278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.408308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.408343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.659 [2024-12-11 15:01:57.425618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.659 [2024-12-11 15:01:57.425649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.659 [2024-12-11 15:01:57.425681] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.440962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.440995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.441013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.455170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.455201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.455219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.469413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.469444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.469478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.480979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.481006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.481036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.493853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.493896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.493912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.506320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.506348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.506378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.522584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.522624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:14.918 [2024-12-11 15:01:57.522657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.536092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.536122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.536156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.548481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.548509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.548539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.561801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.561830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.561864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.574399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.574430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.574470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.590378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.590406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.590436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.605281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.605311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.605345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.618429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.618473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:9466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.618490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.632914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.632945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.632964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.648892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.648937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.648954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.660861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.660905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.660921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.673194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.673221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.673252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.918 [2024-12-11 15:01:57.686680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:14.918 [2024-12-11 15:01:57.686712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.918 [2024-12-11 15:01:57.686730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.182 [2024-12-11 15:01:57.699914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.182 [2024-12-11 15:01:57.699965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.182 [2024-12-11 15:01:57.699983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.182 [2024-12-11 15:01:57.712534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.182 [2024-12-11 15:01:57.712584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.182 [2024-12-11 15:01:57.712601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.182 [2024-12-11 15:01:57.725142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.725170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.725202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.737702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.737731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.737748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.750686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.750716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.750749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.767201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.767245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.767262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.780167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.780197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.780230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.795971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.796016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.796034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.807278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 
[2024-12-11 15:01:57.807309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.807327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.821417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.821446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.821476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.833850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.833879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.833895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.847639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.847670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.847703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.863226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.863254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.863285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.873770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.873799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.873831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.889154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.889182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.889214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.904245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.904290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.904306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.915487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.915517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.915557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.929707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.929739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.929763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.183 [2024-12-11 15:01:57.942538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.183 [2024-12-11 15:01:57.942595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.183 [2024-12-11 15:01:57.942613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:57.955980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:57.956014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:57.956032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:57.972240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:57.972268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:57.972299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:57.982614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:57.982643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:57.982675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:57.998959] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:57.998990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:57.999008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.013222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.013250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:58.013265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.029020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.029048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:58.029063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.043330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.043362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:58.043380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.054653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.054681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:58.054713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.068945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.068975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:58.069006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.084068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.084095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.467 [2024-12-11 15:01:58.084126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:15.467 [2024-12-11 15:01:58.099126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.467 [2024-12-11 15:01:58.099153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.099184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.109685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.109716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.109735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.124822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.124851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.124867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.139715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.139745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.139778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.156819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.156863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.156880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.167246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.167273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.167310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.182113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.182155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.182173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.195839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.195883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.195899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.207110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.207155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.207173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.468 [2024-12-11 15:01:58.222610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.468 [2024-12-11 15:01:58.222639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.468 [2024-12-11 15:01:58.222669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.237658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.237704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.237723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.251376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.251421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.251439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.264979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.265022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.265039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.277335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.277364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.277395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.291437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.291492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.291510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.306046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.306074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.306106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.317161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.317189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.317221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.331775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.331807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.758 [2024-12-11 15:01:58.331825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.758 [2024-12-11 15:01:58.347258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.758 [2024-12-11 15:01:58.347286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.759 [2024-12-11 15:01:58.347316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.759 [2024-12-11 15:01:58.361742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.759 [2024-12-11 15:01:58.361772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.759 [2024-12-11 15:01:58.361806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.759 18284.00 IOPS, 71.42 MiB/s [2024-12-11T14:01:58.532Z] [2024-12-11 15:01:58.375632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13ce390) 00:25:15.759 [2024-12-11 15:01:58.375660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 
lba:13758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.759 [2024-12-11 15:01:58.375691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:15.759
00:25:15.759 Latency(us)
00:25:15.759 [2024-12-11T14:01:58.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:15.759 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:15.759 nvme0n1 : 2.01 18301.25 71.49 0.00 0.00 6981.31 3470.98 24758.04
00:25:15.759 [2024-12-11T14:01:58.532Z] ===================================================================================================================
00:25:15.759 [2024-12-11T14:01:58.532Z] Total : 18301.25 71.49 0.00 0.00 6981.31 3470.98 24758.04
00:25:15.759 {
00:25:15.759 "results": [
00:25:15.759 {
00:25:15.759 "job": "nvme0n1",
00:25:15.759 "core_mask": "0x2",
00:25:15.759 "workload": "randread",
00:25:15.759 "status": "finished",
00:25:15.759 "queue_depth": 128,
00:25:15.759 "io_size": 4096,
00:25:15.759 "runtime": 2.008606,
00:25:15.759 "iops": 18301.24972244432,
00:25:15.759 "mibps": 71.48925672829813,
00:25:15.759 "io_failed": 0,
00:25:15.759 "io_timeout": 0,
00:25:15.759 "avg_latency_us": 6981.307380486036,
00:25:15.759 "min_latency_us": 3470.9807407407407,
00:25:15.759 "max_latency_us": 24758.044444444444
00:25:15.759 }
00:25:15.759 ],
00:25:15.759 "core_count": 1
00:25:15.759 }
00:25:15.759 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:15.759 | .driver_specific
00:25:15.759 | .nvme_error
00:25:15.759 | .status_code
00:25:15.759 | .command_transient_transport_error'
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 775422
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 775422 ']'
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 775422
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775422
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775422'
killing process with pid 775422
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 775422
00:25:16.017 Received shutdown signal, test time was about 2.000000 seconds
00:25:16.017
00:25:16.017 Latency(us)
00:25:16.017 [2024-12-11T14:01:58.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.017 [2024-12-11T14:01:58.790Z] ===================================================================================================================
00:25:16.017 [2024-12-11T14:01:58.790Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:16.017 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 775422
00:25:16.276 15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=775832
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 775832 /var/tmp/bperf.sock
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 775832 ']'
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
15:01:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:16.276 [2024-12-11 15:01:58.962402] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:25:16.276 [2024-12-11 15:01:58.962492] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775832 ]
00:25:16.276 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:16.276 Zero copy mechanism will not be used.
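The pass/fail decision for the run that just finished comes down to one RPC plus a jq filter: host/digest.sh@71 (traced above) reads the per-bdev NVMe completion counters that bdev_get_iostat exposes once bdev_nvme_set_options was called with --nvme-error-stat, and asserts that the transient-transport-error count is non-zero (144 in this run). A minimal standalone sketch of that check follows; the SPDK_DIR and RPC_SOCK variables and the function wrapper are illustrative assumptions, while the RPC name and the jq path are copied verbatim from the trace.

  #!/usr/bin/env bash
  # Hedged sketch of the transient-error check (host/digest.sh@71 above).
  # SPDK_DIR and RPC_SOCK are assumptions; adjust to your environment.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  RPC_SOCK=/var/tmp/bperf.sock

  get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-bdev NVMe error counters when
    # bdev_nvme_set_options was invoked with --nvme-error-stat.
    "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)
  # The harness only requires a non-zero count; this run recorded 144.
  (( errcount > 0 )) || echo "no transient transport errors recorded" >&2
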
00:25:16.276 [2024-12-11 15:01:59.034100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:16.534 [2024-12-11 15:01:59.094297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:16.534 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:16.534 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:16.534 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:16.534 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:16.791 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:16.791 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:16.791 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:16.791 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:16.791 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:16.791 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:17.049 nvme0n1
00:25:17.049 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:17.049 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:17.049 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:17.049 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:17.049 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:17.049 15:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:17.308 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:17.308 Zero copy mechanism will not be used.
00:25:17.308 Running I/O for 2 seconds...
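The xtrace above is the heart of this nvmf_digest_error case: per-NVMe error counting is switched on with unlimited bdev retries, any previous crc32c injection is cleared, the TCP controller is attached with data digest (--ddgst) enabled, injection is armed to corrupt the next 32 crc32c operations in the accel layer, and only then is I/O kicked off. Collapsed into plain commands, same RPCs and arguments as the trace:

  # Condensed from the xtrace above: arm data-digest corruption, then drive reads.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors, retry forever
  RPC accel_error_inject_error -o crc32c -t disable                  # start from a clean state
  RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # data digest enabled on the ctrlr
  RPC accel_error_inject_error -o crc32c -t corrupt -i 32            # corrupt the next 32 crc32c ops
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With the receive-side crc32c deliberately corrupted, every affected READ below fails its data-digest check and completes with the transient transport error status that the counter check at the end of the test asserts on.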
00:25:17.308 [2024-12-11 15:01:59.922222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0)
00:25:17.308 [2024-12-11 15:01:59.922278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.308 [2024-12-11 15:01:59.922300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern repeats for every injected corruption from 15:01:59.927 through 15:02:00.625 (cids 0-14, len:32 READs at varying LBAs, each completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22)); the intermediate records are elided ...]
00:25:18.089 [2024-12-11 15:02:00.625087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0)
00:25:18.089 [2024-12-11 15:02:00.625117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.089 [2024-12-11 15:02:00.625135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:18.089 [2024-12-11 15:02:00.629574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0)
00:25:18.089 [2024-12-11 15:02:00.629603] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.629620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.634166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.634195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.634212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.638695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.638724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.638740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.643196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.643224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.643241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.647655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.647684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.647702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.652184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.652213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.652229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.656677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.656717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.656735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.661186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 
[2024-12-11 15:02:00.661216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.661233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.665755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.665785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.665802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.670378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.670408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.670425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.674915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.674945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.674962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.679563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.679592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.679609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.683976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.684005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.684022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.688452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.688484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.688501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.693008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.693040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.693057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.697594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.697625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.697642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.702793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.089 [2024-12-11 15:02:00.702823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.089 [2024-12-11 15:02:00.702841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.089 [2024-12-11 15:02:00.708256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.708287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.708304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.713832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.713863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.713882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.719368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.719399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.719417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.725108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.725139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.725156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.730795] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.730826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.730844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.735128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.735160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.735177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.740253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.740285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.740309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.746190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.746235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.746253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.753130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.753162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.753180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.760230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.760277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.760294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.767457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.767503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.767520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:25:18.090 [2024-12-11 15:02:00.773851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.773884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.773901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.779923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.779956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.779974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.785999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.786031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.786050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.791834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.791867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.791884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.797572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.797611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.797630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.803329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.803361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.803379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.809187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.809218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.809236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.815778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.815811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.815828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.822569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.822607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.822625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.828524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.828568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.828607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.834816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.090 [2024-12-11 15:02:00.834863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.090 [2024-12-11 15:02:00.834880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.090 [2024-12-11 15:02:00.841303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.091 [2024-12-11 15:02:00.841350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.091 [2024-12-11 15:02:00.841367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.091 [2024-12-11 15:02:00.848912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.091 [2024-12-11 15:02:00.848945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.091 [2024-12-11 15:02:00.848977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.091 [2024-12-11 15:02:00.856599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.091 [2024-12-11 15:02:00.856645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.091 [2024-12-11 15:02:00.856663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.863528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.863568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.863586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.870078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.870109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.870141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.875639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.875670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.875688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.880231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.880261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.880278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.884802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.884833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.884851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.889533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.889572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.889597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.895063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.895095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.895112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.902598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.902628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.902654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.909055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.909087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.909105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.914689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.914719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.914737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.920297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.920329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.920346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.350 5459.00 IOPS, 682.38 MiB/s [2024-12-11T14:02:01.123Z] [2024-12-11 15:02:00.927071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.927116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.927135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.350 [2024-12-11 15:02:00.933889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.350 [2024-12-11 15:02:00.933920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.350 [2024-12-11 15:02:00.933938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.351 [2024-12-11 15:02:00.940869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.351 [2024-12-11 15:02:00.940901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13344 len:32 
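The record groups above show the initiator-side NVMe/TCP data digest (DDGST) check failing on each READ: nvme_tcp_accel_seq_recv_compute_crc32_done reports the mismatch, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the host is allowed to retry. NVMe/TCP header and data digests are CRC32C. A minimal, self-contained sketch of that check follows; the payload and the single bit-flip are hypothetical stand-ins for an on-the-wire corruption, and the bitwise loop matches the CRC32C algorithm, not SPDK's accelerated implementation:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78 --
 * the digest algorithm NVMe/TCP uses for HDGST/DDGST. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    /* Hypothetical C2HData PDU payload. */
    const char payload[] = "example C2HData payload";
    uint32_t expected = crc32c(0, payload, sizeof(payload) - 1);

    /* Simulate one corrupted bit on the wire, then recompute the
     * digest the way the receiver would. */
    char wire[sizeof(payload)];
    memcpy(wire, payload, sizeof(payload));
    wire[3] ^= 0x01;
    uint32_t actual = crc32c(0, wire, sizeof(wire) - 1);

    if (actual != expected)
        printf("data digest error: expected 0x%08x, got 0x%08x\n",
               expected, actual);
    return 0;
}

Any single flipped bit changes the CRC32C value, so the receiver detects the corruption and, as in the log, fails the command with a retryable transport error instead of handing corrupt data to the block layer.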
[... data digest error record groups continue in the same pattern, timestamps 15:02:00.927 through 15:02:01.244 ...]
00:25:18.611 [2024-12-11 15:02:01.249170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0)
00:25:18.611 [2024-12-11 15:02:01.249199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.611 [2024-12-11 15:02:01.249216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:18.611 [2024-12-11 15:02:01.254159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0)
00:25:18.611 [2024-12-11 15:02:01.254191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:18.611 [2024-12-11 15:02:01.254209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.611 [2024-12-11 15:02:01.258740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.611 [2024-12-11 15:02:01.258771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.611 [2024-12-11 15:02:01.258796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.611 [2024-12-11 15:02:01.263872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.611 [2024-12-11 15:02:01.263914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.611 [2024-12-11 15:02:01.263931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.611 [2024-12-11 15:02:01.269228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.611 [2024-12-11 15:02:01.269260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.611 [2024-12-11 15:02:01.269288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.611 [2024-12-11 15:02:01.275168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.611 [2024-12-11 15:02:01.275200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.611 [2024-12-11 15:02:01.275217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.611 [2024-12-11 15:02:01.280371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.611 [2024-12-11 15:02:01.280402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.611 [2024-12-11 15:02:01.280419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.611 [2024-12-11 15:02:01.284187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.611 [2024-12-11 15:02:01.284218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.611 [2024-12-11 15:02:01.284236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.289288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.289333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 
[2024-12-11 15:02:01.289351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.295951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.296008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.296025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.302467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.302497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.302529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.309586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.309643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.309662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.317733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.317764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.317782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.325622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.325654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.325672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.333383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.333430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.333448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.341084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.341115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.341147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.348808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.348841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.348859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.356647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.356679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.356697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.364263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.364295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.364313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.371962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.371994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.372012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.612 [2024-12-11 15:02:01.379604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.612 [2024-12-11 15:02:01.379642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.612 [2024-12-11 15:02:01.379659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.870 [2024-12-11 15:02:01.387330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.870 [2024-12-11 15:02:01.387360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.870 [2024-12-11 15:02:01.387378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.870 [2024-12-11 15:02:01.395019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.870 [2024-12-11 15:02:01.395050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.870 [2024-12-11 15:02:01.395081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.402774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.402805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.402823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.410434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.410465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.410483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.418084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.418116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.418134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.425387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.425435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.425453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.431953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.432000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.432018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.437755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.437786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.437811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.442428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.442459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.442476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.446935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.446967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.446985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.451465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.451496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.451514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.455973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.456002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.456019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.461270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.461300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.461332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.466647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.466677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.466695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.472570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.472601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.472632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.477625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 
[2024-12-11 15:02:01.477655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.477687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.482462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.482511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.482528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.487206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.487250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.487267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.491880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.491910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.491944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.496494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.496523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.496539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.501196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.501227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.501245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.506199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.506230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.506248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.511739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.511770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.511788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.518751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.518782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.518799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.524260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.524292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.524310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.529514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.529552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.529571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.534587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.534618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.534635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.539504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.539535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.539561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.544691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.871 [2024-12-11 15:02:01.544723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.871 [2024-12-11 15:02:01.544741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.871 [2024-12-11 15:02:01.550392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.550424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.550442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.555855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.555887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.555905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.559617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.559648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.559666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.564961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.564992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.565025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.570904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.570948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.570970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.577064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.577096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.577132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.584723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.584753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.584770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.591808] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.591854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.591870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.598221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.598267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.598284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.604186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.604232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.604248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.609570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.609615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.609632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.614646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.614677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.614695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.619675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.619706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.619724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.624975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.625022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.625039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:18.872 [2024-12-11 15:02:01.630133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.630164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.630196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.635464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.635494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.635527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:18.872 [2024-12-11 15:02:01.640680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:18.872 [2024-12-11 15:02:01.640711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:18.872 [2024-12-11 15:02:01.640729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.645515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.645554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.645574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.650457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.650488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.650505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.655522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.655559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.655579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.658329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.658358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.658390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.663495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.663540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.663574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.670016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.670046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.670063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.675884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.675916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.675933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.680744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.680776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.680793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.685418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.685449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.685466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.690300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.690330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.690363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.695873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.695905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.695922] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.701656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.701688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.701706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.706973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.707005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.707022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.712304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.712358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.712376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.717204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.717235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.717253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.723013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.723045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.723063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.131 [2024-12-11 15:02:01.728150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.131 [2024-12-11 15:02:01.728181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.131 [2024-12-11 15:02:01.728199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.733424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.733455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.733472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.737145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.737176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.737194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.740963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.740993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.741010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.745468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.745514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.745531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.750636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.750680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.750696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.755949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.755978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.755994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.761184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.761214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.761231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.766371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.766417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:19.132 [2024-12-11 15:02:01.766434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.771666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.771696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.771714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.777100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.777144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.777161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.781754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.781785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.781802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.786858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.786889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.786907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.792091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.792122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.792154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.796610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.796640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.796665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.802115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.802146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24640 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.802163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.808233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.808264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.808281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.813698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.813729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.813747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.819289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.819320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.819338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.825427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.825459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.825477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.831688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.831720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.831738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.837659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.837690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.837709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.842961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.842992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.843010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.847583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.847620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.847638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.852262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.852291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.852308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.857019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.857049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.857066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.861686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.861716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.861733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.866381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.866411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.132 [2024-12-11 15:02:01.866442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.132 [2024-12-11 15:02:01.871036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.132 [2024-12-11 15:02:01.871066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.871083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.133 [2024-12-11 15:02:01.875739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.133 [2024-12-11 15:02:01.875769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.875786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.133 [2024-12-11 15:02:01.880785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.133 [2024-12-11 15:02:01.880817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.880835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.133 [2024-12-11 15:02:01.885722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.133 [2024-12-11 15:02:01.885752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.885770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.133 [2024-12-11 15:02:01.890811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.133 [2024-12-11 15:02:01.890842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.890859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:19.133 [2024-12-11 15:02:01.895667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.133 [2024-12-11 15:02:01.895698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.895715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:19.133 [2024-12-11 15:02:01.899586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.133 [2024-12-11 15:02:01.899617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.133 [2024-12-11 15:02:01.899635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:19.391 [2024-12-11 15:02:01.906240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.391 [2024-12-11 15:02:01.906272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.391 [2024-12-11 15:02:01.906290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:19.391 [2024-12-11 15:02:01.913869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0) 00:25:19.391 
[2024-12-11 15:02:01.913900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.391 [2024-12-11 15:02:01.913933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:19.391 [2024-12-11 15:02:01.921441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8338a0)
00:25:19.391 [2024-12-11 15:02:01.921471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:19.391 [2024-12-11 15:02:01.921503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:19.391 5580.00 IOPS, 697.50 MiB/s
00:25:19.391 Latency(us)
00:25:19.391 [2024-12-11T14:02:02.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:19.391 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:19.391 nvme0n1 : 2.00 5579.51 697.44 0.00 0.00 2863.38 682.67 8495.41
00:25:19.391 [2024-12-11T14:02:02.164Z] ===================================================================================================================
00:25:19.391 [2024-12-11T14:02:02.164Z] Total : 5579.51 697.44 0.00 0.00 2863.38 682.67 8495.41
00:25:19.391 {
00:25:19.391   "results": [
00:25:19.391     {
00:25:19.391       "job": "nvme0n1",
00:25:19.391       "core_mask": "0x2",
00:25:19.391       "workload": "randread",
00:25:19.391       "status": "finished",
00:25:19.391       "queue_depth": 16,
00:25:19.391       "io_size": 131072,
00:25:19.391       "runtime": 2.003043,
00:25:19.391       "iops": 5579.510774356816,
00:25:19.391       "mibps": 697.438846794602,
00:25:19.391       "io_failed": 0,
00:25:19.391       "io_timeout": 0,
00:25:19.391       "avg_latency_us": 2863.378223176648,
00:25:19.391       "min_latency_us": 682.6666666666666,
00:25:19.391       "max_latency_us": 8495.407407407407
00:25:19.391     }
00:25:19.391   ],
00:25:19.391   "core_count": 1
00:25:19.391 }
00:25:19.391 15:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
15:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:19.391 | .driver_specific
00:25:19.391 | .nvme_error
00:25:19.391 | .status_code
00:25:19.391 | .command_transient_transport_error'
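The trace above is the test's entire measurement path: get_transient_errcount asks the bdevperf instance for iostat over its private RPC socket and pulls a single counter out of the per-bdev NVMe error statistics with jq. A minimal standalone sketch of the same extraction, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and exposing a bdev named nvme0n1 (paths as in this run):

    # count of COMMAND TRANSIENT TRANSPORT ERROR completions seen by nvme0n1
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "nvme0n1 absorbed $errcount transient transport errors"

The (( 361 > 0 )) assertion that follows is exactly this comparison: 361 digest-induced transient errors were counted (and retried) during the 2-second randread pass, so the check passes.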
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 ))
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 775832
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 775832 ']'
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 775832
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775832
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775832'
killing process with pid 775832
15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 775832
Received shutdown signal, test time was about 2.000000 seconds
00:25:19.649
00:25:19.649 Latency(us)
00:25:19.649 [2024-12-11T14:02:02.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:19.649 [2024-12-11T14:02:02.422Z] ===================================================================================================================
00:25:19.649 [2024-12-11T14:02:02.422Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:19.649 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 775832
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=776329
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 776329 /var/tmp/bperf.sock
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 776329 ']'
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:19.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:19.907 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
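As with the read pass, the harness starts a fresh bdevperf for the randwrite case: -z launches the application idle so it can be configured over RPC before any I/O is issued, -r binds its RPC server to the private socket /var/tmp/bperf.sock, and waitforlisten polls until that socket accepts RPCs. A rough standalone equivalent of the launch-and-wait step (the until loop is a simplification of the harness's waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle (-z) on a private RPC socket, then wait for the socket
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done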
00:25:19.907 [2024-12-11 15:02:02.534739] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
00:25:19.907 [2024-12-11 15:02:02.534827] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776329 ]
00:25:19.908 [2024-12-11 15:02:02.602762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:19.908 [2024-12-11 15:02:02.660933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:20.165 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:20.165 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:25:20.165 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:20.165 15:02:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:20.423 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:20.423 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:20.423 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:20.423 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:20.423 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:20.423 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:20.989 nvme0n1
00:25:20.989 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:25:20.989 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:20.989 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:20.989 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:20.989 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:20.989 15:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:20.989 Running I/O for 2 seconds...
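That completes the write-side setup, and it is the core of the digest-error scenario: bdev_nvme is told to keep NVMe error statistics and to retry failed I/O indefinitely (--bdev-retry-count -1), the controller is attached with --ddgst so TCP data digests are generated and verified, and the crc32c accel operation is switched from disable (so the attach itself stays clean) to corrupt, with the same -i 256 argument the trace shows. Note the two RPC sockets: bperf_rpc passes -s /var/tmp/bperf.sock to reach bdevperf, while rpc_cmd carries no -s flag and so goes to the nvmf target app on its default socket. Condensed under those assumptions:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bperf side: keep NVMe error stats, retry failed I/O forever
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side (default socket): make sure the attach happens with clean digests
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: now corrupt crc32c results so data digests mismatch
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The effect is visible immediately below: writes hit tcp.c:data_crc32_calc_done with a data digest error on the target side of the connection, each is completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the host-side retry path absorbs them while bdevperf keeps running.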
00:25:20.989 [2024-12-11 15:02:03.661999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee23b8 00:25:20.989 [2024-12-11 15:02:03.663557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.663612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.671570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee6300 00:25:20.989 [2024-12-11 15:02:03.672441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.672486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.686111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016efb048 00:25:20.989 [2024-12-11 15:02:03.687571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.687617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.697088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef81e0 00:25:20.989 [2024-12-11 15:02:03.698317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.698348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.708844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee6300 00:25:20.989 [2024-12-11 15:02:03.709956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.710000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.719997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee5658 00:25:20.989 [2024-12-11 15:02:03.720913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.720942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.731133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef9b30 00:25:20.989 [2024-12-11 15:02:03.731898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.731943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.746365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee1710 00:25:20.989 [2024-12-11 15:02:03.748139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.748183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:20.989 [2024-12-11 15:02:03.754742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016eeb760 00:25:20.989 [2024-12-11 15:02:03.755554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:20.989 [2024-12-11 15:02:03.755597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.766534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016eebb98 00:25:21.247 [2024-12-11 15:02:03.767426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.247 [2024-12-11 15:02:03.767469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.780130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef81e0 00:25:21.247 [2024-12-11 15:02:03.781373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.247 [2024-12-11 15:02:03.781423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.793871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee6fa8 00:25:21.247 [2024-12-11 15:02:03.795767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.247 [2024-12-11 15:02:03.795810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.802210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee5658 00:25:21.247 [2024-12-11 15:02:03.803105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.247 [2024-12-11 15:02:03.803147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.813506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016eedd58 00:25:21.247 [2024-12-11 15:02:03.814393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.247 [2024-12-11 15:02:03.814436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.825363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016eeea00 00:25:21.247 [2024-12-11 15:02:03.826337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.247 [2024-12-11 15:02:03.826381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:21.247 [2024-12-11 15:02:03.839883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016efbcf0 00:25:21.248 [2024-12-11 15:02:03.841475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.841519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.850576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ee1b48 00:25:21.248 [2024-12-11 15:02:03.852337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.852366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.860563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef1868 00:25:21.248 [2024-12-11 15:02:03.861415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.861456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.875985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.876294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.876324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.890111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.890358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.890403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.904064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.904419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.904447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.918248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.918456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.918484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.931529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.931778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.931807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.945406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.945746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.945775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.959526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.959776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.959806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.973688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.973966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.974009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:03.987728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:03.988006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:03.988051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:04.001643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:04.001894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:04.001922] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.248 [2024-12-11 15:02:04.015890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.248 [2024-12-11 15:02:04.016184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.248 [2024-12-11 15:02:04.016227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.029721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.029987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.030015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.043877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.044120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.044165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.058077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.058338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.058366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.072051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.072325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.072368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.086146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.086405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.086433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.100192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.100511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.100562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.114332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.114603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.114632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.128417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.128774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.128808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.142303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.142592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.142621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.156341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.156643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.156671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.170485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.506 [2024-12-11 15:02:04.170701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.506 [2024-12-11 15:02:04.170730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.506 [2024-12-11 15:02:04.184186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.184448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 15:02:04.184492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.507 [2024-12-11 15:02:04.197802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.198122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 
15:02:04.198151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.507 [2024-12-11 15:02:04.211815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.212077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 15:02:04.212120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.507 [2024-12-11 15:02:04.225756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.226002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 15:02:04.226031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.507 [2024-12-11 15:02:04.239562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.239800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 15:02:04.239828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.507 [2024-12-11 15:02:04.253522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.253776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 15:02:04.253804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.507 [2024-12-11 15:02:04.267642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.507 [2024-12-11 15:02:04.267966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.507 [2024-12-11 15:02:04.268008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.281682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.281936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.281979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.295767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.296049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:21.765 [2024-12-11 15:02:04.296093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.310017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.310275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.310303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.324142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.324401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.324430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.338211] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.338496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.338524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.352323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.352607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.352636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.366418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.366742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.366771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.380638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.380986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.381014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.394787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.395048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22889 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.395091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.408883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.409166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.409209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.422911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.423171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.423200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.436642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.436888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.436916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.450413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.450665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.450694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.464610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.464873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.464915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.478635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.478896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.478924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.492506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.492824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23204 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.492858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.506620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.506913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.506956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.520939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.521223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.521252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:21.765 [2024-12-11 15:02:04.534777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:21.765 [2024-12-11 15:02:04.535035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.765 [2024-12-11 15:02:04.535079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.024 [2024-12-11 15:02:04.548569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:22.024 [2024-12-11 15:02:04.548884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.024 [2024-12-11 15:02:04.548912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.024 [2024-12-11 15:02:04.562137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:22.024 [2024-12-11 15:02:04.562424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.024 [2024-12-11 15:02:04.562467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.024 [2024-12-11 15:02:04.576210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:22.024 [2024-12-11 15:02:04.576539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.024 [2024-12-11 15:02:04.576589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.024 [2024-12-11 15:02:04.590409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0 00:25:22.024 [2024-12-11 15:02:04.590685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:22.024 [2024-12-11 15:02:04.590713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
[2024-12-11 15:02:04.604 through 15:02:05.644: ~75 near-identical repetitions elided. Each is a tcp.c:2241:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x1db7e40) with pdu=0x200016ef92c0, followed by an nvme_qpair.c print_command/print_completion pair for a 4 KiB WRITE on qid:1 (cid alternating 23/95, lba varying, len:1) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0070. One bdevperf progress sample was interleaved: 18781.00 IOPS, 73.36 MiB/s [2024-12-11T14:02:04.797Z]]
18494.50 IOPS, 72.24 MiB/s
Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1 : 2.01 18494.35 72.24 0.00 0.00 6904.60 2694.26 16311.18
===================================================================================================================
Total : 18494.35 72.24 0.00 0.00 6904.60 2694.26 16311.18
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.008667,
      "iops": 18494.35471384754,
      "mibps": 72.24357310096696,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 6904.603714491093,
      "min_latency_us": 2694.257777777778,
      "max_latency_us": 16311.182222222222
    }
  ],
  "core_count": 1
}
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
| .driver_specific
| .nvme_error
| .status_code
| .command_transient_transport_error'
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 776329
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 776329 ']'
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 776329
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776329
15:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776329'
killing process with pid 776329
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 776329
Received shutdown signal, test time was about 2.000000 seconds
Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
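The get_transient_errcount helper traced above is a thin wrapper around bdev_get_iostat: with bdev_nvme_set_options --nvme-error-stat in effect, the iostat JSON carries per-status-code NVMe error counters, and the jq filter pulls out command_transient_transport_error. A minimal standalone sketch of the same check (rpc.py path, socket, and bdev name taken from this trace; the surrounding shell is illustrative, not the autotest helper itself):

    # Sketch: reproduce the transient-error check from host/digest.sh@27-28/@71.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors surfaced as $errcount transient transport errors"

As a sanity check on the stats above: 18494.35 IOPS at an IO size of 4096 B works out to 18494.35 * 4096 / 2^20, which is about 72.24 MiB/s, matching the reported throughput.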
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 776329
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=776767
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 776767 /var/tmp/bperf.sock
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 776767 ']'
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-12-11 15:02:06.254982] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization...
[2024-12-11 15:02:06.255061] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776767 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
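The bdevperf invocation above starts the app idle: -z makes it wait until perform_tests is issued over the RPC socket named by -r. Roughly, the launch-and-wait step could be sketched as follows (waitforlisten is the autotest helper; the polling loop here is an illustrative stand-in, not its actual implementation):

    # Sketch: start bdevperf idle (-z) so the workload is triggered later over RPC.
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$BDEVPERF" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Stand-in for waitforlisten: poll the UNIX-domain RPC socket until it answers.
    until "$RPC" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done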
[2024-12-11 15:02:06.321760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-11 15:02:06.380180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
15:02:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
15:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
15:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
15:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
15:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
15:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
15:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
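Condensed, the setup traced above is: enable per-bdev NVMe error counters, clear any stale CRC32C error injection, attach the target with data digest (--ddgst) so TCP data PDUs carry a CRC32C, then arm the injector and start the run. A sketch of the same RPC sequence (rpc_cmd in the trace talks to the target app's default RPC socket, which is assumed here; bperf_rpc explicitly passes -s /var/tmp/bperf.sock):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: count NVMe errors per bdev and retry failed I/O indefinitely.
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Target side: clear any previous CRC32C error injection before attaching.
    "$RPC" accel_error_inject_error -o crc32c -t disable
    # Attach with data digest enabled; prints the new bdev name (nvme0n1 above).
    "$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm the injector (-t corrupt -i 32, as traced) and kick off the workload.
    "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests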
00:25:24.605 [2024-12-11 15:02:07.237004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8
00:25:24.605 [2024-12-11 15:02:07.237183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.605 [2024-12-11 15:02:07.237224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[2024-12-11 15:02:07.243 through 15:02:07.453: ~40 near-identical repetitions elided. Same digest-error/WRITE/transient-error triplet as above, now on tqpair=(0x1db8180) with pdu=0x200016eff3c8, one per 128 KiB WRITE (qid:1 cid:0, len:32 blocks of 4096 B), with sqhd cycling 0002/0022/0042/0062]
00:25:24.865 [2024-12-11 15:02:07.458015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8
00:25:24.865 [2024-12-11 15:02:07.458111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:24.865 [2024-12-11 15:02:07.458140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.463281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.463444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.463473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.469604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.469739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.469768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.475951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.476090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.476119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.482987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.483141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.483170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.489791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.489945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.489980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.495113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.495188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.495217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.500180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.500267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.500297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.505121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.505220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.505249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.510172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.510253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.510281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.515270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.515356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.515385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.520403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.520490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.520518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.525463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.525561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.525590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.531142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.531227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.531255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.536780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.536862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.536892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.542591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.542664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.542691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.548424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.548498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.548525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.553431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.553521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.865 [2024-12-11 15:02:07.553558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.865 [2024-12-11 15:02:07.558349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.865 [2024-12-11 15:02:07.558421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.558448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.563391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.563478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.563507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.568392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.568469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.568498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.573355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.573477] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.573506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.578397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.578485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.578513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.583471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.583579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.583607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.588495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.588566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.588593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.593336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.593412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.593440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.598359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.598461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.598489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.603323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.603417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.603445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.608213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.608286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.608313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.613282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.613361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.613389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.619084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.619169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.619197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.624031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.624105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.624138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.629102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.629172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.629199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:24.866 [2024-12-11 15:02:07.634226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:24.866 [2024-12-11 15:02:07.634306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.866 [2024-12-11 15:02:07.634334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.639212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.639297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.639326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.644237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 
15:02:07.644332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.644361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.649282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.649363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.649391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.654325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.654417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.654445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.659261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.659350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.659378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.664311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.664398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.664427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.669370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.669470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.669499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.674413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.674496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.674524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.679347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 
00:25:25.125 [2024-12-11 15:02:07.679429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.679456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.684684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.684770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.684799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.690292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.690375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.690407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.125 [2024-12-11 15:02:07.696029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.125 [2024-12-11 15:02:07.696105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.125 [2024-12-11 15:02:07.696132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.701155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.701232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.701259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.706065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.706147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.706175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.711171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.711259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.711288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.716102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.716232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.716260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.721562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.721695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.721723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.727827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.728010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.728039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.734370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.734518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.734552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.741724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.741868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.741898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.748629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.748785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.748816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.755803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.755928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.755958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.763208] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.763313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.763342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.770833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.771009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.771043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.778144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.778257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.778286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.785190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.785302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.785331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.792714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.792840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.792870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.799459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.799709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.799739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.806541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.806959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.806987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.813591] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.813929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.813974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.819970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.820287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.820315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.824864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.825191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.825220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.829470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.829729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.829757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.834013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.834202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.834231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.838898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.839205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.839233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.844482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.844797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.844826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.126 
[2024-12-11 15:02:07.849985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.850280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.850309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.855905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.856110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.856138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.861569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.861851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.126 [2024-12-11 15:02:07.861879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.126 [2024-12-11 15:02:07.866902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.126 [2024-12-11 15:02:07.867184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.127 [2024-12-11 15:02:07.867213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.127 [2024-12-11 15:02:07.872231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.127 [2024-12-11 15:02:07.872525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.127 [2024-12-11 15:02:07.872560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.127 [2024-12-11 15:02:07.877585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.127 [2024-12-11 15:02:07.877857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.127 [2024-12-11 15:02:07.877885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.127 [2024-12-11 15:02:07.882916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.127 [2024-12-11 15:02:07.883175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.127 [2024-12-11 15:02:07.883203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:25:25.127 [2024-12-11 15:02:07.888374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.127 [2024-12-11 15:02:07.888668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.127 [2024-12-11 15:02:07.888697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.127 [2024-12-11 15:02:07.893740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.127 [2024-12-11 15:02:07.893979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.127 [2024-12-11 15:02:07.894009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.899029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.899329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.899357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.904253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.904519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.904558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.909563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.909879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.909907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.914912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.915195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.915224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.920135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.920394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.920431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.925536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.925891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.925920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.930920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.931210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.931238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.936071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.936402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.936430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.941201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.941447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.941476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.945999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.946163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.946191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.950803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.951070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.951099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.955998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.956291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.956319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.961636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.961881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.961910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.967338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.967650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.967679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.972430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.972746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.972775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.977571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.977808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.977836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.982780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.982980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.983009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.988094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.988330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.387 [2024-12-11 15:02:07.988359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.387 [2024-12-11 15:02:07.993210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.387 [2024-12-11 15:02:07.993412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:07.993441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:07.998393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:07.998618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:07.998653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:08.003465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:08.003715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:08.003746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:08.008613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:08.008929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:08.008958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:08.013638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:08.013846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:08.013874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:08.018867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:08.019090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:08.019118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:08.023923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:08.024140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 15:02:08.024169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:25.388 [2024-12-11 15:02:08.029121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:25.388 [2024-12-11 15:02:08.029349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.388 [2024-12-11 
15:02:08.029377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:25.388 [2024-12-11 15:02:08.034218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8
00:25:25.388 [2024-12-11 15:02:08.034431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:25.388 [2024-12-11 15:02:08.034459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-record pattern (data_crc32_calc_done *ERROR*, WRITE command print, TRANSIENT TRANSPORT ERROR completion) repeats roughly every 5 ms from 15:02:08.039 through 15:02:08.236, always on tqpair=(0x1db8180) with pdu=0x200016eff3c8; only lba, cid (0/1) and sqhd (cycling 0002/0022/0042/0062) vary ...]
00:25:25.650 5743.00 IOPS, 717.88 MiB/s [2024-12-11T14:02:08.423Z]
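As a cross-check on the progress sample above: assuming the namespace uses 4 KiB logical blocks (an inference, the block size is not printed in this log), each len:32 WRITE carries 32 × 4096 B = 128 KiB, and 5743.00 IOPS × 0.125 MiB = 717.875 MiB/s, which rounds to the reported 717.88 MiB/s.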
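Context for the repeated errors in this stretch: with the NVMe/TCP data digest (DDGST) enabled, the receiver recomputes a CRC32C over each data PDU payload and compares it against the digest field carried in the PDU. On a mismatch, SPDK's TCP transport logs the data digest error seen here and completes the command with status (00/22), i.e. status code type 0x0 (generic) and status code 0x22 (Command Transient Transport Error); dnr:0 leaves the "do not retry" bit clear, so the host may retry. Below is a minimal standalone sketch of the digest calculation, illustrative only; SPDK itself uses its own helpers such as spdk_crc32c_update() from include/spdk/crc32.h, typically hardware-accelerated.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
     * the checksum NVMe/TCP uses for both HDGST and DDGST. Table-driven
     * or SSE4.2 variants compute the same value, just faster. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;                      /* initial seed */
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
        return crc ^ 0xFFFFFFFFu;                        /* final inversion */
    }

    int main(void)
    {
        /* Standard check value: CRC32C of "123456789" is 0xE3069283. */
        const char msg[] = "123456789";
        printf("0x%08X\n", (unsigned)crc32c((const uint8_t *)msg, 9));
        return 0;
    }

A receiver whose computed value differs from the 4-byte digest following the PDU data knows the payload changed in transit, whether through genuine corruption or deliberate mangling by a fault-injection test, and that is exactly the condition data_crc32_calc_done flags above.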
[... the triplet resumes at 15:02:08.242 and repeats with the same shape through 15:02:08.731; only lba, cid and sqhd vary ...]
00:25:26.173 [2024-12-11 15:02:08.736092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8
00:25:26.173 [2024-12-11 15:02:08.736260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:26.173 [2024-12-11 15:02:08.736289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:26.173 [2024-12-11 15:02:08.741047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8
00:25:26.173 [2024-12-11 15:02:08.741203] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.741231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.746124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.746287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.746315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.751088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.751246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.751274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.756244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.756434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.756463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.761257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.761389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.761418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.766440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.766609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.766638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.771412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.771558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.771587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.776497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 
15:02:08.776620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.776648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.173 [2024-12-11 15:02:08.781581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.173 [2024-12-11 15:02:08.781733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.173 [2024-12-11 15:02:08.781761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.786645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.786806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.786834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.791713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.791891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.791919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.796804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.796962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.796990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.801895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.802041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.802069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.806973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.807127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.807156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.812051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 
00:25:26.174 [2024-12-11 15:02:08.812205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.812233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.817105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.817288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.817316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.822161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.822367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.822396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.827291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.827441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.827469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.832442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.832648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.832676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.837415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.837597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.837627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.842539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.842695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.842723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.847627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.847772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.847800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.852682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.852836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.852871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.857675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.857817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.857845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.862750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.862889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.862917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.867851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.867965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.867993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.873002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.873161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.873189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.878109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.878264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.878293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.883313] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.883444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.883472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.888468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.888662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.888690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.893524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.893735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.893763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.898804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.898956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.898985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.903909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.904064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.904093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.909132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.909252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.909281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.914176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.914313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.914341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.919341] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.174 [2024-12-11 15:02:08.919524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.174 [2024-12-11 15:02:08.919562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.174 [2024-12-11 15:02:08.924526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.175 [2024-12-11 15:02:08.924731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.175 [2024-12-11 15:02:08.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.175 [2024-12-11 15:02:08.929668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.175 [2024-12-11 15:02:08.929808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.175 [2024-12-11 15:02:08.929836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.175 [2024-12-11 15:02:08.934656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.175 [2024-12-11 15:02:08.934783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.175 [2024-12-11 15:02:08.934811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.175 [2024-12-11 15:02:08.939713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.175 [2024-12-11 15:02:08.939852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.175 [2024-12-11 15:02:08.939880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.944934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.945068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.945097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.949917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.950134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.950162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.434 
[2024-12-11 15:02:08.955042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.955217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.955244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.960072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.960169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.960197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.965136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.965380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.965408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.970319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.970460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.970487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.975611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.975776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.975805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.980814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.980997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.981026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.985938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.986087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.986120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.991063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.991172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.991200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:08.996205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:08.996379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:08.996408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.001381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.001577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.001607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.006780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.006969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.006999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.011904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.012033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.012062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.016967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.017125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.017155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.022162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.022327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.022356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.027386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.027529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.027565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.032504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.032741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.032770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.037683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.037821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.037849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.042765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.434 [2024-12-11 15:02:09.042914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.434 [2024-12-11 15:02:09.042942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.434 [2024-12-11 15:02:09.047850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.047981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.048009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.052954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.053068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.053095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.058025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.058149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.058177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.063083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.063231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.063259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.068236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.068433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.068462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.073347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.073508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.073538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.078376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.078589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.078618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.083406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.083559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.083598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.088472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.088641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.088670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.093512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.093707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.093744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.098647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.098792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.098821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.103755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.103869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.103897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.108939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.109074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.109102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.114137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.114286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.114314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.119270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.119379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.119412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.124410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.124576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.124605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.129666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.129777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 
15:02:09.129805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.134733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.134871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.134899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.139801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.139932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.139960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.144886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.145021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.145049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.149972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.150099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.150127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.155053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.155192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.155219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.160221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.160388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.160416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.165366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.165463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:26.435 [2024-12-11 15:02:09.165491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.170478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.170673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.170705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.175463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.175639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.175667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.180576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.180715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.180743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.435 [2024-12-11 15:02:09.185804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.435 [2024-12-11 15:02:09.185913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.435 [2024-12-11 15:02:09.185941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.436 [2024-12-11 15:02:09.190836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.436 [2024-12-11 15:02:09.190992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.436 [2024-12-11 15:02:09.191020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.436 [2024-12-11 15:02:09.195854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.436 [2024-12-11 15:02:09.195956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.436 [2024-12-11 15:02:09.195983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.436 [2024-12-11 15:02:09.200980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.436 [2024-12-11 15:02:09.201156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:26.436 [2024-12-11 15:02:09.201199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.206077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.206242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.206270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.211271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.211446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.211474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.216352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.216522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.216566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.221466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.221614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.221642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.226635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.226808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.226837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.231602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.231758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.231787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.694 [2024-12-11 15:02:09.236654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.236865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.236893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.694 5915.50 IOPS, 739.44 MiB/s [2024-12-11T14:02:09.467Z] [2024-12-11 15:02:09.243015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1db8180) with pdu=0x200016eff3c8 00:25:26.694 [2024-12-11 15:02:09.243199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.694 [2024-12-11 15:02:09.243227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.694 00:25:26.694 Latency(us) 00:25:26.694 [2024-12-11T14:02:09.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.694 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:26.694 nvme0n1 : 2.00 5913.22 739.15 0.00 0.00 2698.41 1868.99 7573.05 00:25:26.694 [2024-12-11T14:02:09.467Z] =================================================================================================================== 00:25:26.694 [2024-12-11T14:02:09.467Z] Total : 5913.22 739.15 0.00 0.00 2698.41 1868.99 7573.05 00:25:26.694 { 00:25:26.694 "results": [ 00:25:26.694 { 00:25:26.694 "job": "nvme0n1", 00:25:26.694 "core_mask": "0x2", 00:25:26.694 "workload": "randwrite", 00:25:26.694 "status": "finished", 00:25:26.694 "queue_depth": 16, 00:25:26.694 "io_size": 131072, 00:25:26.694 "runtime": 2.004324, 00:25:26.694 "iops": 5913.215627812669, 00:25:26.694 "mibps": 739.1519534765836, 00:25:26.694 "io_failed": 0, 00:25:26.694 "io_timeout": 0, 00:25:26.694 "avg_latency_us": 2698.4074859064262, 00:25:26.694 "min_latency_us": 1868.9896296296297, 00:25:26.694 "max_latency_us": 7573.0488888888885 00:25:26.694 } 00:25:26.694 ], 00:25:26.694 "core_count": 1 00:25:26.694 } 00:25:26.694 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:26.694 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:26.694 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:26.694 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:26.694 | .driver_specific 00:25:26.694 | .nvme_error 00:25:26.694 | .status_code 00:25:26.694 | .command_transient_transport_error' 00:25:26.952 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:25:26.952 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 776767 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 776767 ']' 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 776767 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776767 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776767' 00:25:26.953 killing process with pid 776767 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 776767 00:25:26.953 Received shutdown signal, test time was about 2.000000 seconds 00:25:26.953 00:25:26.953 Latency(us) 00:25:26.953 [2024-12-11T14:02:09.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.953 [2024-12-11T14:02:09.726Z] =================================================================================================================== 00:25:26.953 [2024-12-11T14:02:09.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.953 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 776767 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 775391 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 775391 ']' 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 775391 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775391 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775391' 00:25:27.211 killing process with pid 775391 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 775391 00:25:27.211 15:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 775391 00:25:27.469 00:25:27.469 real 0m15.314s 00:25:27.469 user 0m30.857s 00:25:27.469 sys 0m4.203s 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 ************************************ 00:25:27.469 END TEST nvmf_digest_error 00:25:27.469 ************************************ 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 
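The transient-error assertion in the digest test above boils down to a single RPC-plus-jq query against bdevperf's control socket. A minimal stand-alone replay of that check, using only the socket path, bdev name, and jq filter that appear in the trace (run from the SPDK repo root so scripts/rpc.py resolves):

    # Sketch: read the transient transport error count the digest test checks.
    # Socket path, bdev name, and filter are the values logged above.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
    # host/digest.sh then asserts the printed count is non-zero
    # (here it saw 383 transient transport errors).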
00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:27.469 rmmod nvme_tcp 00:25:27.469 rmmod nvme_fabrics 00:25:27.469 rmmod nvme_keyring 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 775391 ']' 00:25:27.469 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 775391 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 775391 ']' 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 775391 00:25:27.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (775391) - No such process 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 775391 is not found' 00:25:27.470 Process with pid 775391 is not found 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.470 15:02:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.009 00:25:30.009 real 0m35.699s 00:25:30.009 user 1m3.399s 00:25:30.009 sys 0m10.228s 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:30.009 ************************************ 00:25:30.009 END TEST nvmf_digest 00:25:30.009 ************************************ 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:30.009 
15:02:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.009 ************************************ 00:25:30.009 START TEST nvmf_bdevperf 00:25:30.009 ************************************ 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:30.009 * Looking for test storage... 00:25:30.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.009 --rc genhtml_branch_coverage=1 00:25:30.009 --rc genhtml_function_coverage=1 00:25:30.009 --rc genhtml_legend=1 00:25:30.009 --rc geninfo_all_blocks=1 00:25:30.009 --rc geninfo_unexecuted_blocks=1 00:25:30.009 00:25:30.009 ' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.009 --rc genhtml_branch_coverage=1 00:25:30.009 --rc genhtml_function_coverage=1 00:25:30.009 --rc genhtml_legend=1 00:25:30.009 --rc geninfo_all_blocks=1 00:25:30.009 --rc geninfo_unexecuted_blocks=1 00:25:30.009 00:25:30.009 ' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.009 --rc genhtml_branch_coverage=1 00:25:30.009 --rc genhtml_function_coverage=1 00:25:30.009 --rc genhtml_legend=1 00:25:30.009 --rc geninfo_all_blocks=1 00:25:30.009 --rc geninfo_unexecuted_blocks=1 00:25:30.009 00:25:30.009 ' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:30.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.009 --rc genhtml_branch_coverage=1 00:25:30.009 --rc genhtml_function_coverage=1 00:25:30.009 --rc genhtml_legend=1 00:25:30.009 --rc geninfo_all_blocks=1 00:25:30.009 --rc geninfo_unexecuted_blocks=1 00:25:30.009 00:25:30.009 ' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.009 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.010 15:02:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:31.916 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:31.916 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:31.916 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:31.916 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.916 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.917 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.917 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.917 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:32.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:32.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:25:32.175 00:25:32.175 --- 10.0.0.2 ping statistics --- 00:25:32.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.175 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:32.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:32.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:32.175 00:25:32.175 --- 10.0.0.1 ping statistics --- 00:25:32.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:32.175 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=779127 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 779127 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 779127 ']' 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.175 15:02:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.175 [2024-12-11 15:02:14.787354] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
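For reference, the namespace plumbing that nvmf_tcp_init just walked through reduces to the following sequence (interface, namespace, and address values are the ones logged for this run; the preliminary ip -4 addr flush steps are omitted from this sketch):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target sanity check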
00:25:32.175 [2024-12-11 15:02:14.787453] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.175 [2024-12-11 15:02:14.862499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:32.175 [2024-12-11 15:02:14.922141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.175 [2024-12-11 15:02:14.922195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.175 [2024-12-11 15:02:14.922220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:32.175 [2024-12-11 15:02:14.922240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:32.175 [2024-12-11 15:02:14.922257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:32.175 [2024-12-11 15:02:14.923868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.175 [2024-12-11 15:02:14.923893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.175 [2024-12-11 15:02:14.923898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 [2024-12-11 15:02:15.075569] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 Malloc0 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.434 [2024-12-11 15:02:15.134163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:32.434 { 00:25:32.434 "params": { 00:25:32.434 "name": "Nvme$subsystem", 00:25:32.434 "trtype": "$TEST_TRANSPORT", 00:25:32.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:32.434 "adrfam": "ipv4", 00:25:32.434 "trsvcid": "$NVMF_PORT", 00:25:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:32.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:32.434 "hdgst": ${hdgst:-false}, 00:25:32.434 "ddgst": ${ddgst:-false} 00:25:32.434 }, 00:25:32.434 "method": "bdev_nvme_attach_controller" 00:25:32.434 } 00:25:32.434 EOF 00:25:32.434 )") 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:32.434 15:02:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:32.434 "params": { 00:25:32.434 "name": "Nvme1", 00:25:32.434 "trtype": "tcp", 00:25:32.434 "traddr": "10.0.0.2", 00:25:32.434 "adrfam": "ipv4", 00:25:32.434 "trsvcid": "4420", 00:25:32.434 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.434 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.434 "hdgst": false, 00:25:32.434 "ddgst": false 00:25:32.434 }, 00:25:32.434 "method": "bdev_nvme_attach_controller" 00:25:32.434 }' 00:25:32.434 [2024-12-11 15:02:15.186982] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 
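Collapsing the xtrace above, the whole target bring-up plus the first bdevperf pass is five RPCs and one invocation. A condensed sketch with the logged names, sizes, and addresses (rpc.py here stands in for the rpc_cmd wrapper, which talks to the nvmf_tgt started inside the namespace earlier):

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf then reads the gen_nvmf_target_json output (the
    # bdev_nvme_attach_controller config printed above) on /dev/fd/62,
    # a descriptor the harness wires up via process substitution:
    build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1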
00:25:32.434 [2024-12-11 15:02:15.187047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779264 ] 00:25:32.692 [2024-12-11 15:02:15.255730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.692 [2024-12-11 15:02:15.316558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.950 Running I/O for 1 seconds... 00:25:33.884 8335.00 IOPS, 32.56 MiB/s 00:25:33.884 Latency(us) 00:25:33.884 [2024-12-11T14:02:16.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.884 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:33.884 Verification LBA range: start 0x0 length 0x4000 00:25:33.884 Nvme1n1 : 1.05 8093.82 31.62 0.00 0.00 15192.63 3422.44 48545.19 00:25:33.884 [2024-12-11T14:02:16.657Z] =================================================================================================================== 00:25:33.884 [2024-12-11T14:02:16.657Z] Total : 8093.82 31.62 0.00 0.00 15192.63 3422.44 48545.19 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=779409 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:34.142 { 00:25:34.142 "params": { 00:25:34.142 "name": "Nvme$subsystem", 00:25:34.142 "trtype": "$TEST_TRANSPORT", 00:25:34.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.142 "adrfam": "ipv4", 00:25:34.142 "trsvcid": "$NVMF_PORT", 00:25:34.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.142 "hdgst": ${hdgst:-false}, 00:25:34.142 "ddgst": ${ddgst:-false} 00:25:34.142 }, 00:25:34.142 "method": "bdev_nvme_attach_controller" 00:25:34.142 } 00:25:34.142 EOF 00:25:34.142 )") 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:34.142 15:02:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:34.142 "params": { 00:25:34.142 "name": "Nvme1", 00:25:34.142 "trtype": "tcp", 00:25:34.142 "traddr": "10.0.0.2", 00:25:34.142 "adrfam": "ipv4", 00:25:34.142 "trsvcid": "4420", 00:25:34.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.142 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.142 "hdgst": false, 00:25:34.142 "ddgst": false 00:25:34.142 }, 00:25:34.142 "method": "bdev_nvme_attach_controller" 00:25:34.142 }' 00:25:34.142 [2024-12-11 15:02:16.838117] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:25:34.142 [2024-12-11 15:02:16.838191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779409 ] 00:25:34.142 [2024-12-11 15:02:16.907113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.401 [2024-12-11 15:02:16.963975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.658 Running I/O for 15 seconds... 00:25:36.965 8404.00 IOPS, 32.83 MiB/s [2024-12-11T14:02:19.999Z] 8503.50 IOPS, 33.22 MiB/s [2024-12-11T14:02:19.999Z] 15:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 779127 00:25:37.226 15:02:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:37.226 [2024-12-11 15:02:19.801041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801283] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.226 [2024-12-11 15:02:19.801420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.226 [2024-12-11 15:02:19.801433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.227 [2024-12-11 15:02:19.801446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.227 [2024-12-11 15:02:19.801458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.227 [2024-12-11 15:02:19.801472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.227 [2024-12-11 15:02:19.801485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.227 [2024-12-11 15:02:19.801499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.227 [2024-12-11 15:02:19.801512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.227 [2024-12-11 15:02:19.801526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.227 [2024-12-11 15:02:19.801565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.227 [2024-12-11 15:02:19.801583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.227 [2024-12-11 15:02:19.801598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.227 [2024-12-11 15:02:19.801613 .. 15:02:19.804872] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [repeated command/completion pairs elided, varying only in cid and lba: WRITE sqid:1 lba:38512..38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 lba:37752..38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, interleaved, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:25:37.229 [2024-12-11 15:02:19.804885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235c0e0 is same with the state(6) to be set
00:25:37.229 [2024-12-11 15:02:19.804901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:37.229 [2024-12-11 15:02:19.804911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:37.229 [2024-12-11 15:02:19.804921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38376 len:8 PRP1 0x0 PRP2 0x0
00:25:37.229 [2024-12-11 15:02:19.804933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.229 [2024-12-11 15:02:19.808126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:37.229 [2024-12-11 15:02:19.808205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:37.229 [2024-12-11 15:02:19.808858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:37.229 [2024-12-11 15:02:19.808903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:37.229 [2024-12-11 15:02:19.808919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:37.229 [2024-12-11 15:02:19.809156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:37.230 [2024-12-11 15:02:19.809352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:37.230 [2024-12-11 15:02:19.809371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:37.230 [2024-12-11 15:02:19.809387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:37.230 [2024-12-11 15:02:19.809402] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
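The "(00/08)" code printed with every aborted completion above decodes as status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion): the queued I/O never executed, it was flushed when its submission queue was torn down for the controller reset, and dnr:0 (Do Not Retry clear) marks it as retryable. A minimal sketch, assuming SPDK's public spdk/nvme.h definitions (struct io_ctx and its retry flag are hypothetical), of how an I/O completion callback can tell these retryable aborts apart from real media errors:

```c
#include <stdbool.h>

#include "spdk/nvme.h"

/* Hypothetical per-I/O context passed as cb_arg at submission time. */
struct io_ctx {
	bool retry;
};

/* Matches the spdk_nvme_cmd_cb signature used by the I/O submit APIs. */
static void
io_completion_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *io = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* "(00/08)" in the log: the command was aborted because its
		 * SQ was deleted mid-reset; it can be resubmitted once the
		 * controller reconnects instead of being failed upward. */
		io->retry = true;
	}
}
```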
00:25:37.230 .. 00:25:37.491 [2024-12-11 15:02:19.821735 .. 15:02:20.088783] [the same reset/reconnect-failure sequence repeats 21 more times at roughly 13 ms intervals, varying only in timestamps, each attempt logging: nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller -> posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 -> nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set -> nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor -> nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: Ctrlr is in error state -> nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: controller reinitialization failed -> nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: in failed state. -> bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.]
00:25:37.491 [2024-12-11 15:02:20.101220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.491 [2024-12-11 15:02:20.101578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.491 [2024-12-11 15:02:20.101607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.491 [2024-12-11 15:02:20.101624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.491 [2024-12-11 15:02:20.101855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.491 [2024-12-11 15:02:20.102065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.491 [2024-12-11 15:02:20.102083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.491 [2024-12-11 15:02:20.102095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.491 [2024-12-11 15:02:20.102106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.491 [2024-12-11 15:02:20.114751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.491 [2024-12-11 15:02:20.115126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.491 [2024-12-11 15:02:20.115169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.491 [2024-12-11 15:02:20.115185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.491 [2024-12-11 15:02:20.115453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.491 [2024-12-11 15:02:20.115670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.491 [2024-12-11 15:02:20.115691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.491 [2024-12-11 15:02:20.115703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.491 [2024-12-11 15:02:20.115715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.491 [2024-12-11 15:02:20.128006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.491 [2024-12-11 15:02:20.128397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.491 [2024-12-11 15:02:20.128426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.491 [2024-12-11 15:02:20.128442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.491 [2024-12-11 15:02:20.128685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.491 [2024-12-11 15:02:20.128895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.491 [2024-12-11 15:02:20.128914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.491 [2024-12-11 15:02:20.128926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.491 [2024-12-11 15:02:20.128938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.491 [2024-12-11 15:02:20.141292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.491 [2024-12-11 15:02:20.141661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.491 [2024-12-11 15:02:20.141704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.491 [2024-12-11 15:02:20.141720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.491 [2024-12-11 15:02:20.141990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.491 [2024-12-11 15:02:20.142184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.491 [2024-12-11 15:02:20.142202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.491 [2024-12-11 15:02:20.142214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.491 [2024-12-11 15:02:20.142226] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.491 [2024-12-11 15:02:20.154667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.491 [2024-12-11 15:02:20.155098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.491 [2024-12-11 15:02:20.155140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.491 [2024-12-11 15:02:20.155157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.491 [2024-12-11 15:02:20.155404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.491 [2024-12-11 15:02:20.155638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.491 [2024-12-11 15:02:20.155659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.491 [2024-12-11 15:02:20.155671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.491 [2024-12-11 15:02:20.155684] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.491 [2024-12-11 15:02:20.167914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.491 [2024-12-11 15:02:20.168285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.491 [2024-12-11 15:02:20.168328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.491 [2024-12-11 15:02:20.168343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.491 [2024-12-11 15:02:20.168620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.491 [2024-12-11 15:02:20.168815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.491 [2024-12-11 15:02:20.168833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.491 [2024-12-11 15:02:20.168845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.491 [2024-12-11 15:02:20.168856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.491 [2024-12-11 15:02:20.181089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.492 [2024-12-11 15:02:20.181432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.492 [2024-12-11 15:02:20.181460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.492 [2024-12-11 15:02:20.181475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.492 [2024-12-11 15:02:20.181709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.492 [2024-12-11 15:02:20.181920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.492 [2024-12-11 15:02:20.181939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.492 [2024-12-11 15:02:20.181951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.492 [2024-12-11 15:02:20.181962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.492 [2024-12-11 15:02:20.194326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.492 [2024-12-11 15:02:20.194725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.492 [2024-12-11 15:02:20.194768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.492 [2024-12-11 15:02:20.194783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.492 [2024-12-11 15:02:20.195016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.492 [2024-12-11 15:02:20.195210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.492 [2024-12-11 15:02:20.195233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.492 [2024-12-11 15:02:20.195245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.492 [2024-12-11 15:02:20.195256] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.492 [2024-12-11 15:02:20.207551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.492 [2024-12-11 15:02:20.207922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.492 [2024-12-11 15:02:20.207949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.492 [2024-12-11 15:02:20.207964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.492 [2024-12-11 15:02:20.208185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.492 [2024-12-11 15:02:20.208409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.492 [2024-12-11 15:02:20.208443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.492 [2024-12-11 15:02:20.208456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.492 [2024-12-11 15:02:20.208468] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.492 [2024-12-11 15:02:20.220971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.492 [2024-12-11 15:02:20.221334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.492 [2024-12-11 15:02:20.221377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.492 [2024-12-11 15:02:20.221394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.492 [2024-12-11 15:02:20.221664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.492 [2024-12-11 15:02:20.221900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.492 [2024-12-11 15:02:20.221918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.492 [2024-12-11 15:02:20.221930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.492 [2024-12-11 15:02:20.221941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.492 [2024-12-11 15:02:20.234433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.492 [2024-12-11 15:02:20.234795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.492 [2024-12-11 15:02:20.234839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.492 [2024-12-11 15:02:20.234857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.492 [2024-12-11 15:02:20.235083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.492 [2024-12-11 15:02:20.235294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.492 [2024-12-11 15:02:20.235313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.492 [2024-12-11 15:02:20.235325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.492 [2024-12-11 15:02:20.235341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.492 [2024-12-11 15:02:20.248286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.492 [2024-12-11 15:02:20.248669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.492 [2024-12-11 15:02:20.248697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.492 [2024-12-11 15:02:20.248713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.492 [2024-12-11 15:02:20.248946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.492 [2024-12-11 15:02:20.249162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.492 [2024-12-11 15:02:20.249181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.492 [2024-12-11 15:02:20.249193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.492 [2024-12-11 15:02:20.249205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.752 [2024-12-11 15:02:20.262218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.262643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.262672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.262689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.262920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.263136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.263155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.752 [2024-12-11 15:02:20.263167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.752 [2024-12-11 15:02:20.263178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.752 [2024-12-11 15:02:20.275655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.276049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.276076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.276108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.276350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.276581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.276612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.752 [2024-12-11 15:02:20.276626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.752 [2024-12-11 15:02:20.276639] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.752 [2024-12-11 15:02:20.289201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.289662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.289693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.289709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.289940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.290149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.290167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.752 [2024-12-11 15:02:20.290179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.752 [2024-12-11 15:02:20.290191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.752 [2024-12-11 15:02:20.302764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.303178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.303220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.303237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.303482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.303704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.303723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.752 [2024-12-11 15:02:20.303735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.752 [2024-12-11 15:02:20.303746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.752 [2024-12-11 15:02:20.316350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.316727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.316756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.316772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.317003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.317262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.317283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.752 [2024-12-11 15:02:20.317297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.752 [2024-12-11 15:02:20.317310] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.752 [2024-12-11 15:02:20.329842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.330202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.330260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.330534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.330776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.330797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.752 [2024-12-11 15:02:20.330811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.752 [2024-12-11 15:02:20.330822] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.752 7034.33 IOPS, 27.48 MiB/s [2024-12-11T14:02:20.525Z] [2024-12-11 15:02:20.343171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.752 [2024-12-11 15:02:20.343551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.752 [2024-12-11 15:02:20.343580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.752 [2024-12-11 15:02:20.343597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.752 [2024-12-11 15:02:20.343831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.752 [2024-12-11 15:02:20.344048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.752 [2024-12-11 15:02:20.344067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.344080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.344091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.753 [2024-12-11 15:02:20.356405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.356779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.356821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.356837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.357087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.357281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.357299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.357311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.357322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.753 [2024-12-11 15:02:20.369715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.370077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.370119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.370134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.370382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.370589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.370613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.370625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.370637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.753 [2024-12-11 15:02:20.383101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.383444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.383488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.383504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.383747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.383970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.383990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.384002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.384014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.753 [2024-12-11 15:02:20.396311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.396688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.396716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.396732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.396978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.397172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.397190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.397202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.397213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.753 [2024-12-11 15:02:20.409633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.409972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.409999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.410014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.410232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.410441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.410459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.410471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.410488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.753 [2024-12-11 15:02:20.422623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.423011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.423037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.423052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.423270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.423480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.423498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.423510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.423521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.753 [2024-12-11 15:02:20.435845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.436172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.436199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.436214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.436432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.436671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.436691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.436703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.436715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.753 [2024-12-11 15:02:20.448978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.449346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.753 [2024-12-11 15:02:20.449388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.753 [2024-12-11 15:02:20.449404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.753 [2024-12-11 15:02:20.449686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.753 [2024-12-11 15:02:20.449881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.753 [2024-12-11 15:02:20.449899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.753 [2024-12-11 15:02:20.449911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.753 [2024-12-11 15:02:20.449922] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.753 [2024-12-11 15:02:20.462086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.753 [2024-12-11 15:02:20.462583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-12-11 15:02:20.462610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.754 [2024-12-11 15:02:20.462640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.754 [2024-12-11 15:02:20.462887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.754 [2024-12-11 15:02:20.463081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.754 [2024-12-11 15:02:20.463099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.754 [2024-12-11 15:02:20.463111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.754 [2024-12-11 15:02:20.463123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.754 [2024-12-11 15:02:20.475233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.754 [2024-12-11 15:02:20.475600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-12-11 15:02:20.475628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.754 [2024-12-11 15:02:20.475645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.754 [2024-12-11 15:02:20.475886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.754 [2024-12-11 15:02:20.476095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.754 [2024-12-11 15:02:20.476114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.754 [2024-12-11 15:02:20.476125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.754 [2024-12-11 15:02:20.476137] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.754 [2024-12-11 15:02:20.488389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.754 [2024-12-11 15:02:20.488722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-12-11 15:02:20.488749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.754 [2024-12-11 15:02:20.488764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.754 [2024-12-11 15:02:20.488967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.754 [2024-12-11 15:02:20.489193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.754 [2024-12-11 15:02:20.489212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.754 [2024-12-11 15:02:20.489223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.754 [2024-12-11 15:02:20.489234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:37.754 [2024-12-11 15:02:20.501684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.754 [2024-12-11 15:02:20.502037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-12-11 15:02:20.502064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.754 [2024-12-11 15:02:20.502080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.754 [2024-12-11 15:02:20.502321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.754 [2024-12-11 15:02:20.502558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.754 [2024-12-11 15:02:20.502578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.754 [2024-12-11 15:02:20.502591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.754 [2024-12-11 15:02:20.502602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:37.754 [2024-12-11 15:02:20.514876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:37.754 [2024-12-11 15:02:20.515242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.754 [2024-12-11 15:02:20.515270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:37.754 [2024-12-11 15:02:20.515284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:37.754 [2024-12-11 15:02:20.515501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:37.754 [2024-12-11 15:02:20.515742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:37.754 [2024-12-11 15:02:20.515762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:37.754 [2024-12-11 15:02:20.515775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:37.754 [2024-12-11 15:02:20.515786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.013 [2024-12-11 15:02:20.528357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.013 [2024-12-11 15:02:20.528762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.013 [2024-12-11 15:02:20.528792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.013 [2024-12-11 15:02:20.528809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.013 [2024-12-11 15:02:20.529052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.013 [2024-12-11 15:02:20.529267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.013 [2024-12-11 15:02:20.529286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.013 [2024-12-11 15:02:20.529299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.013 [2024-12-11 15:02:20.529310] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.013 [2024-12-11 15:02:20.541550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.013 [2024-12-11 15:02:20.541948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.013 [2024-12-11 15:02:20.542001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.013 [2024-12-11 15:02:20.542017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.013 [2024-12-11 15:02:20.542263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.013 [2024-12-11 15:02:20.542457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.013 [2024-12-11 15:02:20.542480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.013 [2024-12-11 15:02:20.542493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.013 [2024-12-11 15:02:20.542505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.013 [2024-12-11 15:02:20.554667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.013 [2024-12-11 15:02:20.555090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.013 [2024-12-11 15:02:20.555143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.013 [2024-12-11 15:02:20.555158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.013 [2024-12-11 15:02:20.555417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.013 [2024-12-11 15:02:20.555621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.013 [2024-12-11 15:02:20.555640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.013 [2024-12-11 15:02:20.555652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.013 [2024-12-11 15:02:20.555664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.013 [2024-12-11 15:02:20.567807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.013 [2024-12-11 15:02:20.568246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.013 [2024-12-11 15:02:20.568297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.013 [2024-12-11 15:02:20.568313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.013 [2024-12-11 15:02:20.568590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.013 [2024-12-11 15:02:20.568797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.013 [2024-12-11 15:02:20.568816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.013 [2024-12-11 15:02:20.568829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.013 [2024-12-11 15:02:20.568841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.013 [2024-12-11 15:02:20.581215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.013 [2024-12-11 15:02:20.581617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.581646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.581662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.581892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.582103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.582121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.582133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.582149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.014 [2024-12-11 15:02:20.594368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.594723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.594786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.594800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.595014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.595208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.595226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.595238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.595250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.014 [2024-12-11 15:02:20.607532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.607965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.608031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.608047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.608305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.608516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.608559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.608574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.608586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.014 [2024-12-11 15:02:20.620703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.621055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.621081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.621096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.621311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.621521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.621539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.621559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.621572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.014 [2024-12-11 15:02:20.633898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.634267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.634330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.634373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.634637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.634833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.634851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.634863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.634874] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.014 [2024-12-11 15:02:20.647117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.647519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.647553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.647585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.647802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.648011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.648029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.648041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.648052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.014 [2024-12-11 15:02:20.660296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.660712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.660741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.660757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.661001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.661212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.661230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.661242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.661254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.014 [2024-12-11 15:02:20.673336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.673753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.673795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.673811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.674053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.674248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.674266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.674277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.674289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.014 [2024-12-11 15:02:20.686536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.686867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.686893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.686909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.687164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.687378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.687397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.687408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.687420] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.014 [2024-12-11 15:02:20.699657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.700002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.700030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.014 [2024-12-11 15:02:20.700045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.014 [2024-12-11 15:02:20.700269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.014 [2024-12-11 15:02:20.700478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.014 [2024-12-11 15:02:20.700496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.014 [2024-12-11 15:02:20.700508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.014 [2024-12-11 15:02:20.700519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.014 [2024-12-11 15:02:20.712812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.014 [2024-12-11 15:02:20.713252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.014 [2024-12-11 15:02:20.713293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.015 [2024-12-11 15:02:20.713309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.015 [2024-12-11 15:02:20.713567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.015 [2024-12-11 15:02:20.713761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.015 [2024-12-11 15:02:20.713785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.015 [2024-12-11 15:02:20.713798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.015 [2024-12-11 15:02:20.713809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.015 [2024-12-11 15:02:20.725807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.015 [2024-12-11 15:02:20.726128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.015 [2024-12-11 15:02:20.726154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.015 [2024-12-11 15:02:20.726169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.015 [2024-12-11 15:02:20.726386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.015 [2024-12-11 15:02:20.726606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.015 [2024-12-11 15:02:20.726625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.015 [2024-12-11 15:02:20.726637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.015 [2024-12-11 15:02:20.726649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.015 [2024-12-11 15:02:20.739053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.015 [2024-12-11 15:02:20.739418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.015 [2024-12-11 15:02:20.739445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.015 [2024-12-11 15:02:20.739461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.015 [2024-12-11 15:02:20.739709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.015 [2024-12-11 15:02:20.739903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.015 [2024-12-11 15:02:20.739921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.015 [2024-12-11 15:02:20.739933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.015 [2024-12-11 15:02:20.739944] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.015 [2024-12-11 15:02:20.752208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.015 [2024-12-11 15:02:20.752541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.015 [2024-12-11 15:02:20.752575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.015 [2024-12-11 15:02:20.752591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.015 [2024-12-11 15:02:20.752807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.015 [2024-12-11 15:02:20.753017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.015 [2024-12-11 15:02:20.753036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.015 [2024-12-11 15:02:20.753048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.015 [2024-12-11 15:02:20.753063] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.015 [2024-12-11 15:02:20.765235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.015 [2024-12-11 15:02:20.765605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.015 [2024-12-11 15:02:20.765648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.015 [2024-12-11 15:02:20.765664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.015 [2024-12-11 15:02:20.765934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.015 [2024-12-11 15:02:20.766129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.015 [2024-12-11 15:02:20.766147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.015 [2024-12-11 15:02:20.766159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.015 [2024-12-11 15:02:20.766170] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.015 [2024-12-11 15:02:20.778438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.015 [2024-12-11 15:02:20.778864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.015 [2024-12-11 15:02:20.778906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.015 [2024-12-11 15:02:20.778922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.015 [2024-12-11 15:02:20.779158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.015 [2024-12-11 15:02:20.779391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.015 [2024-12-11 15:02:20.779412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.015 [2024-12-11 15:02:20.779427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.015 [2024-12-11 15:02:20.779446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.274 [2024-12-11 15:02:20.791948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.274 [2024-12-11 15:02:20.792276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.274 [2024-12-11 15:02:20.792304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.274 [2024-12-11 15:02:20.792320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.274 [2024-12-11 15:02:20.792537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.274 [2024-12-11 15:02:20.792749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.274 [2024-12-11 15:02:20.792768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.274 [2024-12-11 15:02:20.792781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.274 [2024-12-11 15:02:20.792793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.274 [2024-12-11 15:02:20.805081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.274 [2024-12-11 15:02:20.805462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.274 [2024-12-11 15:02:20.805491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.274 [2024-12-11 15:02:20.805507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.274 [2024-12-11 15:02:20.805761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.274 [2024-12-11 15:02:20.805988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.274 [2024-12-11 15:02:20.806006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.274 [2024-12-11 15:02:20.806018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.274 [2024-12-11 15:02:20.806030] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.274 [2024-12-11 15:02:20.818305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.274 [2024-12-11 15:02:20.818695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.274 [2024-12-11 15:02:20.818724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.274 [2024-12-11 15:02:20.818741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.274 [2024-12-11 15:02:20.818984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.274 [2024-12-11 15:02:20.819205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.274 [2024-12-11 15:02:20.819225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.274 [2024-12-11 15:02:20.819238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.274 [2024-12-11 15:02:20.819250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.274 [2024-12-11 15:02:20.831736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.274 [2024-12-11 15:02:20.832042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.274 [2024-12-11 15:02:20.832082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.274 [2024-12-11 15:02:20.832098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.274 [2024-12-11 15:02:20.832314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.274 [2024-12-11 15:02:20.832538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.274 [2024-12-11 15:02:20.832566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.274 [2024-12-11 15:02:20.832579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.274 [2024-12-11 15:02:20.832591] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.274 [2024-12-11 15:02:20.845079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.274 [2024-12-11 15:02:20.845416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.274 [2024-12-11 15:02:20.845444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.845459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.845698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.845915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.845933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.845945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.845956] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.275 [2024-12-11 15:02:20.858114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.858525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.858559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.858577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.858819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.859029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.859047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.859059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.859070] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.275 [2024-12-11 15:02:20.871242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.871734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.871775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.871791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.872038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.872232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.872250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.872262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.872273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.275 [2024-12-11 15:02:20.884297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.884725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.884753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.884769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.885012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.885221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.885245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.885257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.885268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.275 [2024-12-11 15:02:20.897564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.897853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.897894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.897909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.898111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.898337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.898355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.898367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.898379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.275 [2024-12-11 15:02:20.910631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.911008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.911035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.911050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.911289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.911499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.911517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.911555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.911570] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.275 [2024-12-11 15:02:20.923856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.924284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.924326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.924343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.924597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.924806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.924824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.924836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.924851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.275 [2024-12-11 15:02:20.937057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.937421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.937448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.937463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.937711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.937922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.937940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.937952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.937963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.275 [2024-12-11 15:02:20.950237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.950558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.950585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.950600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.950802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.951045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.951064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.951076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.951088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.275 [2024-12-11 15:02:20.963249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.963676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.963703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.963735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.963977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.275 [2024-12-11 15:02:20.964171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.275 [2024-12-11 15:02:20.964189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.275 [2024-12-11 15:02:20.964202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.275 [2024-12-11 15:02:20.964213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.275 [2024-12-11 15:02:20.976457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.275 [2024-12-11 15:02:20.976803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.275 [2024-12-11 15:02:20.976831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.275 [2024-12-11 15:02:20.976847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.275 [2024-12-11 15:02:20.977070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.276 [2024-12-11 15:02:20.977281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.276 [2024-12-11 15:02:20.977299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.276 [2024-12-11 15:02:20.977311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.276 [2024-12-11 15:02:20.977322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.276 [2024-12-11 15:02:20.989607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.276 [2024-12-11 15:02:20.989992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.276 [2024-12-11 15:02:20.990032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.276 [2024-12-11 15:02:20.990048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.276 [2024-12-11 15:02:20.990286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.276 [2024-12-11 15:02:20.990481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.276 [2024-12-11 15:02:20.990498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.276 [2024-12-11 15:02:20.990510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.276 [2024-12-11 15:02:20.990521] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.276 [2024-12-11 15:02:21.002611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.276 [2024-12-11 15:02:21.003037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.276 [2024-12-11 15:02:21.003065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.276 [2024-12-11 15:02:21.003080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.276 [2024-12-11 15:02:21.003315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.276 [2024-12-11 15:02:21.003538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.276 [2024-12-11 15:02:21.003568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.276 [2024-12-11 15:02:21.003581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.276 [2024-12-11 15:02:21.003593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.276 [2024-12-11 15:02:21.015760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.276 [2024-12-11 15:02:21.016136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.276 [2024-12-11 15:02:21.016178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.276 [2024-12-11 15:02:21.016193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.276 [2024-12-11 15:02:21.016460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.276 [2024-12-11 15:02:21.016666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.276 [2024-12-11 15:02:21.016686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.276 [2024-12-11 15:02:21.016698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.276 [2024-12-11 15:02:21.016709] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.276 [2024-12-11 15:02:21.028883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.276 [2024-12-11 15:02:21.029248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.276 [2024-12-11 15:02:21.029289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.276 [2024-12-11 15:02:21.029305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.276 [2024-12-11 15:02:21.029581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.276 [2024-12-11 15:02:21.029777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.276 [2024-12-11 15:02:21.029795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.276 [2024-12-11 15:02:21.029807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.276 [2024-12-11 15:02:21.029819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.276 [2024-12-11 15:02:21.042217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.276 [2024-12-11 15:02:21.042557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.276 [2024-12-11 15:02:21.042602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.276 [2024-12-11 15:02:21.042619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.276 [2024-12-11 15:02:21.042841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.276 [2024-12-11 15:02:21.043113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.276 [2024-12-11 15:02:21.043132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.276 [2024-12-11 15:02:21.043144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.276 [2024-12-11 15:02:21.043156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.535 [2024-12-11 15:02:21.055393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.535 [2024-12-11 15:02:21.055849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.535 [2024-12-11 15:02:21.055893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.535 [2024-12-11 15:02:21.055910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.535 [2024-12-11 15:02:21.056178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.535 [2024-12-11 15:02:21.056373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.535 [2024-12-11 15:02:21.056395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.535 [2024-12-11 15:02:21.056408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.535 [2024-12-11 15:02:21.056419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.535 [2024-12-11 15:02:21.068538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.535 [2024-12-11 15:02:21.069000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.535 [2024-12-11 15:02:21.069029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.535 [2024-12-11 15:02:21.069045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.535 [2024-12-11 15:02:21.069280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.535 [2024-12-11 15:02:21.069503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.535 [2024-12-11 15:02:21.069522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.535 [2024-12-11 15:02:21.069535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.535 [2024-12-11 15:02:21.069557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.536 [2024-12-11 15:02:21.081974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.536 [2024-12-11 15:02:21.082337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.536 [2024-12-11 15:02:21.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.536 [2024-12-11 15:02:21.082395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.536 [2024-12-11 15:02:21.082645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.536 [2024-12-11 15:02:21.082859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.536 [2024-12-11 15:02:21.082878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.536 [2024-12-11 15:02:21.082890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.536 [2024-12-11 15:02:21.082901] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.536 [2024-12-11 15:02:21.095135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.536 [2024-12-11 15:02:21.095450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.536 [2024-12-11 15:02:21.095477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.536 [2024-12-11 15:02:21.095492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.536 [2024-12-11 15:02:21.095720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.536 [2024-12-11 15:02:21.095931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.536 [2024-12-11 15:02:21.095950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.536 [2024-12-11 15:02:21.095962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.536 [2024-12-11 15:02:21.095978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.536 [2024-12-11 15:02:21.108412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.536 [2024-12-11 15:02:21.108787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.536 [2024-12-11 15:02:21.108817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.536 [2024-12-11 15:02:21.108833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.536 [2024-12-11 15:02:21.109086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.536 [2024-12-11 15:02:21.109296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.536 [2024-12-11 15:02:21.109314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.536 [2024-12-11 15:02:21.109326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.536 [2024-12-11 15:02:21.109338] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.536 [2024-12-11 15:02:21.121607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.536 [2024-12-11 15:02:21.122022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.536 [2024-12-11 15:02:21.122064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.536 [2024-12-11 15:02:21.122081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.536 [2024-12-11 15:02:21.122323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.536 [2024-12-11 15:02:21.122532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.536 [2024-12-11 15:02:21.122562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.536 [2024-12-11 15:02:21.122575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.536 [2024-12-11 15:02:21.122586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.536 [... the same nine-message cycle (resetting controller → connect() failed, errno = 111 → controller reinitialization failed → Resetting controller failed) repeats for 16 further attempts, 15:02:21.134 through 15:02:21.334; only the timestamps differ ...]
00:25:38.798 5275.75 IOPS, 20.61 MiB/s [2024-12-11T14:02:21.571Z] [2024-12-11 15:02:21.346971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.798 [2024-12-11 15:02:21.347409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.798 [2024-12-11 15:02:21.347437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.798 [2024-12-11 15:02:21.347453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.798 [2024-12-11 15:02:21.347706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.798 [2024-12-11 15:02:21.347925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.798 [2024-12-11 15:02:21.347944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.798 [2024-12-11 15:02:21.347956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.798 [2024-12-11 15:02:21.347968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.798 [2024-12-11 15:02:21.360385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.798 [2024-12-11 15:02:21.360707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.798 [2024-12-11 15:02:21.360750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.798 [2024-12-11 15:02:21.360766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.798 [2024-12-11 15:02:21.360995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.798 [2024-12-11 15:02:21.361212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.798 [2024-12-11 15:02:21.361230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.798 [2024-12-11 15:02:21.361243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.798 [2024-12-11 15:02:21.361254] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.798 [2024-12-11 15:02:21.373704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.798 [2024-12-11 15:02:21.374008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.798 [2024-12-11 15:02:21.374050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.798 [2024-12-11 15:02:21.374065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.798 [2024-12-11 15:02:21.374288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.798 [2024-12-11 15:02:21.374504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.798 [2024-12-11 15:02:21.374523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.798 [2024-12-11 15:02:21.374535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.798 [2024-12-11 15:02:21.374556] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.798 [2024-12-11 15:02:21.387004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.798 [2024-12-11 15:02:21.387379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.798 [2024-12-11 15:02:21.387423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.798 [2024-12-11 15:02:21.387438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.798 [2024-12-11 15:02:21.387707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.798 [2024-12-11 15:02:21.387926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.798 [2024-12-11 15:02:21.387945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.798 [2024-12-11 15:02:21.387958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.798 [2024-12-11 15:02:21.387969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.798 [2024-12-11 15:02:21.400260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.798 [2024-12-11 15:02:21.400652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.798 [2024-12-11 15:02:21.400680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.798 [2024-12-11 15:02:21.400696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.798 [2024-12-11 15:02:21.400918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.798 [2024-12-11 15:02:21.401119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.798 [2024-12-11 15:02:21.401142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.798 [2024-12-11 15:02:21.401155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.798 [2024-12-11 15:02:21.401168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.798 [2024-12-11 15:02:21.413464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.798 [2024-12-11 15:02:21.413858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.798 [2024-12-11 15:02:21.413900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.798 [2024-12-11 15:02:21.413917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.414147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.414363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.414382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.414395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.414406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.799 [2024-12-11 15:02:21.426719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.427089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.427131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.427148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.427401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.427610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.427629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.427641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.427653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.799 [2024-12-11 15:02:21.440018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.440437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.440479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.440496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.440751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.440970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.440989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.441001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.441018] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.799 [2024-12-11 15:02:21.453358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.453741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.453769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.453785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.454028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.454243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.454263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.454275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.454286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.799 [2024-12-11 15:02:21.466650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.466988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.467017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.467033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.467262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.467479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.467497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.467510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.467522] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.799 [2024-12-11 15:02:21.480012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.480390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.480434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.480450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.480729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.480929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.480948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.480961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.480974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.799 [2024-12-11 15:02:21.493339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.493791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.493821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.493837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.494082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.494297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.494316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.494329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.494340] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.799 [2024-12-11 15:02:21.506604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.506933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.506961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.506977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.507206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.507423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.507442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.507454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.507466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.799 [2024-12-11 15:02:21.519788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.520164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.520205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.520222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.520455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.520680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.520700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.520712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.520724] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.799 [2024-12-11 15:02:21.533100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.533479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.533508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.533523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.533784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.799 [2024-12-11 15:02:21.534001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.799 [2024-12-11 15:02:21.534020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.799 [2024-12-11 15:02:21.534032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.799 [2024-12-11 15:02:21.534044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:38.799 [2024-12-11 15:02:21.546322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.799 [2024-12-11 15:02:21.546730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.799 [2024-12-11 15:02:21.546758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.799 [2024-12-11 15:02:21.546774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.799 [2024-12-11 15:02:21.546997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.800 [2024-12-11 15:02:21.547214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.800 [2024-12-11 15:02:21.547233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.800 [2024-12-11 15:02:21.547245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.800 [2024-12-11 15:02:21.547257] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:38.800 [2024-12-11 15:02:21.559747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:38.800 [2024-12-11 15:02:21.560185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:38.800 [2024-12-11 15:02:21.560214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:38.800 [2024-12-11 15:02:21.560230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:38.800 [2024-12-11 15:02:21.560473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:38.800 [2024-12-11 15:02:21.560707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:38.800 [2024-12-11 15:02:21.560728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:38.800 [2024-12-11 15:02:21.560741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:38.800 [2024-12-11 15:02:21.560753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.060 [2024-12-11 15:02:21.573173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.573577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.573620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.573638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.573877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.574114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.574141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.574156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.574169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.060 [2024-12-11 15:02:21.586578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.586953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.586996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.587013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.587257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.587458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.587477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.587489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.587501] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.060 [2024-12-11 15:02:21.599883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.600261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.600290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.600306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.600559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.600761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.600780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.600792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.600804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.060 [2024-12-11 15:02:21.613347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.613701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.613731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.613747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.613978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.614193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.614211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.614223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.614239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.060 [2024-12-11 15:02:21.626735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.627159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.627187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.627203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.627447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.627659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.627678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.627690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.627702] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.060 [2024-12-11 15:02:21.640112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.640505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.640555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.640574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.640818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.641018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.641038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.641050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.641062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.060 [2024-12-11 15:02:21.653370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.653842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.653887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.060 [2024-12-11 15:02:21.653904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.060 [2024-12-11 15:02:21.654174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.060 [2024-12-11 15:02:21.654393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.060 [2024-12-11 15:02:21.654413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.060 [2024-12-11 15:02:21.654426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.060 [2024-12-11 15:02:21.654438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.060 [2024-12-11 15:02:21.666715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.060 [2024-12-11 15:02:21.667054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.060 [2024-12-11 15:02:21.667083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.061 [2024-12-11 15:02:21.667099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.061 [2024-12-11 15:02:21.667329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.061 [2024-12-11 15:02:21.667554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.061 [2024-12-11 15:02:21.667573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.061 [2024-12-11 15:02:21.667585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.061 [2024-12-11 15:02:21.667597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.061 [2024-12-11 15:02:21.679967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.061 [2024-12-11 15:02:21.680342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.061 [2024-12-11 15:02:21.680381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.061 [2024-12-11 15:02:21.680398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.061 [2024-12-11 15:02:21.680650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.061 [2024-12-11 15:02:21.680865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.061 [2024-12-11 15:02:21.680884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.061 [2024-12-11 15:02:21.680896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.061 [2024-12-11 15:02:21.680908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.061 [2024-12-11 15:02:21.693242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.061 [2024-12-11 15:02:21.693596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.061 [2024-12-11 15:02:21.693626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.061 [2024-12-11 15:02:21.693642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.061 [2024-12-11 15:02:21.693873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.061 [2024-12-11 15:02:21.694089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.061 [2024-12-11 15:02:21.694108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.061 [2024-12-11 15:02:21.694120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.061 [2024-12-11 15:02:21.694132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.061 [2024-12-11 15:02:21.706611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.061 [2024-12-11 15:02:21.706972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.061 [2024-12-11 15:02:21.707001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.061 [2024-12-11 15:02:21.707017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.061 [2024-12-11 15:02:21.707264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.061 [2024-12-11 15:02:21.707465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.061 [2024-12-11 15:02:21.707484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.061 [2024-12-11 15:02:21.707496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.061 [2024-12-11 15:02:21.707508] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.061 [2024-12-11 15:02:21.719896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.720315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.720357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.720373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.720627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.061 [2024-12-11 15:02:21.720828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.061 [2024-12-11 15:02:21.720847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.061 [2024-12-11 15:02:21.720859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.061 [2024-12-11 15:02:21.720871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.061 [2024-12-11 15:02:21.733302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.733640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.733669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.733685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.733909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.061 [2024-12-11 15:02:21.734127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.061 [2024-12-11 15:02:21.734146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.061 [2024-12-11 15:02:21.734158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.061 [2024-12-11 15:02:21.734170] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.061 [2024-12-11 15:02:21.746589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.746967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.747009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.747026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.747256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.061 [2024-12-11 15:02:21.747471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.061 [2024-12-11 15:02:21.747495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.061 [2024-12-11 15:02:21.747508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.061 [2024-12-11 15:02:21.747520] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.061 [2024-12-11 15:02:21.759905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.760370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.760398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.760414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.760667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.061 [2024-12-11 15:02:21.760868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.061 [2024-12-11 15:02:21.760886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.061 [2024-12-11 15:02:21.760898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.061 [2024-12-11 15:02:21.760910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.061 [2024-12-11 15:02:21.773206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.773537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.773571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.773586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.773812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.061 [2024-12-11 15:02:21.774028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.061 [2024-12-11 15:02:21.774047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.061 [2024-12-11 15:02:21.774060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.061 [2024-12-11 15:02:21.774072] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.061 [2024-12-11 15:02:21.786591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.786924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.786952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.786967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.787198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.061 [2024-12-11 15:02:21.787415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.061 [2024-12-11 15:02:21.787434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.061 [2024-12-11 15:02:21.787446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.061 [2024-12-11 15:02:21.787463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.061 [2024-12-11 15:02:21.799932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.061 [2024-12-11 15:02:21.800322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.061 [2024-12-11 15:02:21.800349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.061 [2024-12-11 15:02:21.800381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.061 [2024-12-11 15:02:21.800636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.062 [2024-12-11 15:02:21.800837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.062 [2024-12-11 15:02:21.800856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.062 [2024-12-11 15:02:21.800868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.062 [2024-12-11 15:02:21.800880] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.062 [2024-12-11 15:02:21.813205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.062 [2024-12-11 15:02:21.813580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.062 [2024-12-11 15:02:21.813608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.062 [2024-12-11 15:02:21.813624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.062 [2024-12-11 15:02:21.813868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.062 [2024-12-11 15:02:21.814084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.062 [2024-12-11 15:02:21.814103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.062 [2024-12-11 15:02:21.814115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.062 [2024-12-11 15:02:21.814127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.062 [2024-12-11 15:02:21.826563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.062 [2024-12-11 15:02:21.826951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.062 [2024-12-11 15:02:21.826980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.062 [2024-12-11 15:02:21.826996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.062 [2024-12-11 15:02:21.827226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.062 [2024-12-11 15:02:21.827465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.062 [2024-12-11 15:02:21.827485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.062 [2024-12-11 15:02:21.827498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.062 [2024-12-11 15:02:21.827511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.321 [2024-12-11 15:02:21.839942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.321 [2024-12-11 15:02:21.840282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.321 [2024-12-11 15:02:21.840324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.321 [2024-12-11 15:02:21.840340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.321 [2024-12-11 15:02:21.840575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.321 [2024-12-11 15:02:21.840782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.840801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.840814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.840827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.853318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.853662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.853706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.853722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.853951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.854166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.854185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.854197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.854209] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.866522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.866952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.866981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.866997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.867242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.867442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.867460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.867473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.867484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.879762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.880184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.880212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.880228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.880476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.880687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.880706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.880719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.880730] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.893066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.893503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.893531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.893557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.893805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.894019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.894038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.894051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.894062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.906353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.906795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.906823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.906840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.907087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.907303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.907322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.907335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.907346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.919608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.919925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.919954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.919970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.920192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.920393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.920416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.920429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.920441] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.933062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.933499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.933527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.933552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.933787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.934005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.934024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.934036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.934048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.946368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.946827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.946856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.946872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.947116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.947316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.947334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.947346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.947358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.959772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.960100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.960129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.960144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.960359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.960606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.322 [2024-12-11 15:02:21.960627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.322 [2024-12-11 15:02:21.960639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.322 [2024-12-11 15:02:21.960658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.322 [2024-12-11 15:02:21.973106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.322 [2024-12-11 15:02:21.973550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.322 [2024-12-11 15:02:21.973578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.322 [2024-12-11 15:02:21.973594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.322 [2024-12-11 15:02:21.973836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.322 [2024-12-11 15:02:21.974035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:21.974054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:21.974066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:21.974078] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:21.986364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:21.986745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:21.986774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:21.986790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:21.987032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:21.987247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:21.987266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:21.987278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:21.987290] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:21.999736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.000098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.000126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.000142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.000386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.000596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.000616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.000628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.000639] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:22.013128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.013536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.013585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.013602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.013846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.014047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.014065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.014078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.014089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:22.026379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.026822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.026850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.026866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.027110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.027325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.027344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.027356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.027368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:22.039748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.040171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.040198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.040215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.040457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.040687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.040707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.040720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.040732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:22.053020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.053423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.053465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.053482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.053729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.053949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.053968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.053980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.053991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:22.066232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.066628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.066656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.066672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.066895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.067112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.067130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.067143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.067154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.323 [2024-12-11 15:02:22.079606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.323 [2024-12-11 15:02:22.079980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.323 [2024-12-11 15:02:22.080009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.323 [2024-12-11 15:02:22.080025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.323 [2024-12-11 15:02:22.080254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.323 [2024-12-11 15:02:22.080485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.323 [2024-12-11 15:02:22.080506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.323 [2024-12-11 15:02:22.080520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.323 [2024-12-11 15:02:22.080533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.093230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.093609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.093638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.093655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.093886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.094102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.094126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.094139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.094151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.106667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.107054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.107082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.107099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.107343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.107571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.107591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.107603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.107616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.119927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.120292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.120318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.120333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.120577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.120794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.120813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.120825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.120837] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.133182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.133657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.133684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.133714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.133964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.134159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.134177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.134188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.134204] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.146285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.146653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.146695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.146710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.146964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.147173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.147191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.147203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.147214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.159540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.159969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.160011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.160028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.160271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.160495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.160514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.160527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.160538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.172679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.173043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.173071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.173087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.173326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.173535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.173564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.173577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.173589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.185713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.186102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.186143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.186158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.186383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.186605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.186624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.186636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.186647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.198810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.199177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.199205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.583 [2024-12-11 15:02:22.199221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.583 [2024-12-11 15:02:22.199457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.583 [2024-12-11 15:02:22.199680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.583 [2024-12-11 15:02:22.199699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.583 [2024-12-11 15:02:22.199711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.583 [2024-12-11 15:02:22.199722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.583 [2024-12-11 15:02:22.211926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.583 [2024-12-11 15:02:22.212290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.583 [2024-12-11 15:02:22.212332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.212348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.212624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.584 [2024-12-11 15:02:22.212819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.584 [2024-12-11 15:02:22.212838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.584 [2024-12-11 15:02:22.212850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.584 [2024-12-11 15:02:22.212861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.584 [2024-12-11 15:02:22.225130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.584 [2024-12-11 15:02:22.225465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.584 [2024-12-11 15:02:22.225491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.225506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.225758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.584 [2024-12-11 15:02:22.225969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.584 [2024-12-11 15:02:22.225987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.584 [2024-12-11 15:02:22.226000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.584 [2024-12-11 15:02:22.226011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.584 [2024-12-11 15:02:22.238353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.584 [2024-12-11 15:02:22.238725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.584 [2024-12-11 15:02:22.238753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.238769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.239004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.584 [2024-12-11 15:02:22.239224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.584 [2024-12-11 15:02:22.239243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.584 [2024-12-11 15:02:22.239255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.584 [2024-12-11 15:02:22.239267] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.584 [2024-12-11 15:02:22.251439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.584 [2024-12-11 15:02:22.251843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.584 [2024-12-11 15:02:22.251868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.251883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.252114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.584 [2024-12-11 15:02:22.252322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.584 [2024-12-11 15:02:22.252341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.584 [2024-12-11 15:02:22.252353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.584 [2024-12-11 15:02:22.252364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.584 [2024-12-11 15:02:22.264585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.584 [2024-12-11 15:02:22.264940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.584 [2024-12-11 15:02:22.264982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.264998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.265252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.584 [2024-12-11 15:02:22.265446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.584 [2024-12-11 15:02:22.265469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.584 [2024-12-11 15:02:22.265481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.584 [2024-12-11 15:02:22.265492] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.584 [2024-12-11 15:02:22.277791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.584 [2024-12-11 15:02:22.278125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.584 [2024-12-11 15:02:22.278152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.584 [2024-12-11 15:02:22.278167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.584 [2024-12-11 15:02:22.278390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.584 [2024-12-11 15:02:22.278611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.584 [2024-12-11 15:02:22.278630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.584 [2024-12-11 15:02:22.278643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.584 [2024-12-11 15:02:22.278654] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.584 [2024-12-11 15:02:22.290875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.584 [2024-12-11 15:02:22.291207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.584 [2024-12-11 15:02:22.291233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.584 [2024-12-11 15:02:22.291248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.584 [2024-12-11 15:02:22.291466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.584 [2024-12-11 15:02:22.291703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.584 [2024-12-11 15:02:22.291723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.584 [2024-12-11 15:02:22.291735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.584 [2024-12-11 15:02:22.291747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.584 [2024-12-11 15:02:22.303968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.584 [2024-12-11 15:02:22.304459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.584 [2024-12-11 15:02:22.304500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.584 [2024-12-11 15:02:22.304517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.584 [2024-12-11 15:02:22.304794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.584 [2024-12-11 15:02:22.304989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.584 [2024-12-11 15:02:22.305007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.584 [2024-12-11 15:02:22.305019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.584 [2024-12-11 15:02:22.305035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.584 [2024-12-11 15:02:22.317175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.584 [2024-12-11 15:02:22.317540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.584 [2024-12-11 15:02:22.317573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.584 [2024-12-11 15:02:22.317589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.584 [2024-12-11 15:02:22.317825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.584 [2024-12-11 15:02:22.318035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.584 [2024-12-11 15:02:22.318053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.584 [2024-12-11 15:02:22.318065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.584 [2024-12-11 15:02:22.318076] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.584 [2024-12-11 15:02:22.330256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.584 [2024-12-11 15:02:22.330620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.584 [2024-12-11 15:02:22.330649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.330665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.330880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.584 [2024-12-11 15:02:22.331121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.584 [2024-12-11 15:02:22.331142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.584 [2024-12-11 15:02:22.331155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.584 [2024-12-11 15:02:22.331167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:39.584 4220.60 IOPS, 16.49 MiB/s [2024-12-11T14:02:22.357Z]
00:25:39.584 [2024-12-11 15:02:22.343882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:39.584 [2024-12-11 15:02:22.344208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:39.584 [2024-12-11 15:02:22.344250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:39.584 [2024-12-11 15:02:22.344266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:39.584 [2024-12-11 15:02:22.344489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:39.585 [2024-12-11 15:02:22.344733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:39.585 [2024-12-11 15:02:22.344753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:39.585 [2024-12-11 15:02:22.344766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:39.585 [2024-12-11 15:02:22.344778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
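The interleaved report "4220.60 IOPS, 16.49 MiB/s" is bdevperf's periodic throughput sample, still being emitted while every reconnect attempt fails. The two figures are consistent with a 4 KiB I/O size (an assumption; the block size is not visible in this excerpt): 4220.60 IOPS × 4096 bytes ≈ 17.29 MB/s, which is about 16.49 MiB/s.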
00:25:39.845 [2024-12-11 15:02:22.357202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.357657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.357687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.357704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.357955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.358192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.358213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.358226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.358238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.845 [2024-12-11 15:02:22.370274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.370707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.370751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.370767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.371010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.371219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.371237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.371248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.371260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.845 [2024-12-11 15:02:22.383385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.383777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.383819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.383835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.384059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.384269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.384288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.384299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.384310] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.845 [2024-12-11 15:02:22.396436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.396806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.396834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.396855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.397095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.397306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.397324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.397337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.397348] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.845 [2024-12-11 15:02:22.409614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.409965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.409992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.410007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.410243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.410452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.410470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.410482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.410494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.845 [2024-12-11 15:02:22.422786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.423249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.423300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.423316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.423590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.423785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.423803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.423815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.423826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.845 [2024-12-11 15:02:22.435944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.436361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.436411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.436426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.436697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.436891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.436914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.436927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.436938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.845 [2024-12-11 15:02:22.449130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.449612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.449641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.449657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.449903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.450098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.450116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.450127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.450139] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.845 [2024-12-11 15:02:22.462279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.462657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.462701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.462716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.845 [2024-12-11 15:02:22.462988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.845 [2024-12-11 15:02:22.463183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.845 [2024-12-11 15:02:22.463201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.845 [2024-12-11 15:02:22.463213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.845 [2024-12-11 15:02:22.463225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.845 [2024-12-11 15:02:22.475492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.845 [2024-12-11 15:02:22.475819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.845 [2024-12-11 15:02:22.475846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.845 [2024-12-11 15:02:22.475862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.476081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.476291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.476309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.476322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.476337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.846 [2024-12-11 15:02:22.488748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.489175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.489202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.489218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.489455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.489677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.489697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.489709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.489721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.846 [2024-12-11 15:02:22.501975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.502307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.502335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.502349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.502578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.502788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.502806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.502818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.502830] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.846 [2024-12-11 15:02:22.514986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.515353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.515396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.515412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.515678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.515912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.515931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.515942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.515954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.846 [2024-12-11 15:02:22.528218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.528619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.528647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.528663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.528885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.529095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.529114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.529126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.529137] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.846 [2024-12-11 15:02:22.541329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.541700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.541742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.541757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.542005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.542199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.542217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.542229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.542240] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.846 [2024-12-11 15:02:22.554515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.554950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.554978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.554993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.555230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.555439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.555457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.555469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.555480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.846 [2024-12-11 15:02:22.567710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.568061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.568089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.568110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.568347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.568585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.568605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.568617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.568629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.846 [2024-12-11 15:02:22.580923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.581326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.581369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.581385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.581619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.581826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.581846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.581859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.581872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:39.846 [2024-12-11 15:02:22.594236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.594628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.594658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.594674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.594906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.595122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.846 [2024-12-11 15:02:22.595141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.846 [2024-12-11 15:02:22.595154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.846 [2024-12-11 15:02:22.595165] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:39.846 [2024-12-11 15:02:22.607439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:39.846 [2024-12-11 15:02:22.607769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:39.846 [2024-12-11 15:02:22.607811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:39.846 [2024-12-11 15:02:22.607827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:39.846 [2024-12-11 15:02:22.608058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:39.846 [2024-12-11 15:02:22.608268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:39.847 [2024-12-11 15:02:22.608293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:39.847 [2024-12-11 15:02:22.608306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:39.847 [2024-12-11 15:02:22.608317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.107 [2024-12-11 15:02:22.620934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.621302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.621330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.621345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.621576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.621777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.621796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.621808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.621821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.107 [2024-12-11 15:02:22.634189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.634509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.634536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.634564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.634762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.634971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.634990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.635002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.635013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.107 [2024-12-11 15:02:22.647312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.647701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.647729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.647744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.647947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.648172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.648191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.648203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.648218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.107 [2024-12-11 15:02:22.660512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.660884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.660926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.660942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.661192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.661392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.661411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.661423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.661435] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.107 [2024-12-11 15:02:22.673712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.674074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.674116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.674131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.674380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.674586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.674605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.674617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.674629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.107 [2024-12-11 15:02:22.686829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.687209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.687238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.687254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.687497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.687719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.687738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.687750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.687761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.107 [2024-12-11 15:02:22.699948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.700363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.700404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.700420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.700673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.700868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.700886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.700898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.700910] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.107 [2024-12-11 15:02:22.713232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.713663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.713706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.713723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.107 [2024-12-11 15:02:22.713963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.107 [2024-12-11 15:02:22.714172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.107 [2024-12-11 15:02:22.714191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.107 [2024-12-11 15:02:22.714203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.107 [2024-12-11 15:02:22.714214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.107 [2024-12-11 15:02:22.726441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.107 [2024-12-11 15:02:22.726815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.107 [2024-12-11 15:02:22.726843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.107 [2024-12-11 15:02:22.726859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.727080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.727289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.727307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.727319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.727331] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.108 [2024-12-11 15:02:22.739706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.740028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.740056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.740072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.740302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.740511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.740553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.740569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.740581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.108 [2024-12-11 15:02:22.752832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.753197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.753224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.753239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.753476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.753696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.753716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.753728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.753739] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.108 [2024-12-11 15:02:22.766082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.766740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.766779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.766797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.767030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.767226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.767245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.767257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.767269] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.108 [2024-12-11 15:02:22.779233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.779619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.779664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.779681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.779950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.780144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.780168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.780181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.780192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.108 [2024-12-11 15:02:22.792479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.792922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.792965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.792982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.793223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.793417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.793435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.793447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.793458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 779127 Killed "${NVMF_APP[@]}" "$@"
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=780084
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 780084
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 780084 ']'
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:40.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:40.108 15:02:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:40.108 [2024-12-11 15:02:22.806073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:40.108 [2024-12-11 15:02:22.806404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.108 [2024-12-11 15:02:22.806433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420
00:25:40.108 [2024-12-11 15:02:22.806448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set
00:25:40.108 [2024-12-11 15:02:22.806689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor
00:25:40.108 [2024-12-11 15:02:22.806923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:40.108 [2024-12-11 15:02:22.806943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:40.108 [2024-12-11 15:02:22.806955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:40.108 [2024-12-11 15:02:22.806967] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
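Here tgt_init restarts the target: the old nvmf_tgt (pid 779127) was killed, a new one (pid 780084) is launched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten (a shell function in autotest_common.sh, per the trace above) blocks until the new process answers on its RPC socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A rough C sketch of that wait, an illustration of the observable behavior only and not the helper's actual implementation, assuming a 100 ms retry interval:

```c
/* Illustrative sketch only (not the autotest helper's actual code):
 * poll a UNIX domain socket until the freshly started process is
 * accepting connections on it, or give up after max_retries tries. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* process is up and listening */
        }
        close(fd);
        usleep(100 * 1000);      /* assumed 100 ms retry interval */
    }
    return -1;                   /* retries exhausted */
}

int main(void)
{
    /* 100 matches the log's "local max_retries=100" */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("listening");
    else
        puts("timed out");
    return 0;
}
```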
00:25:40.108 [2024-12-11 15:02:22.819609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.820000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.820027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.820043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.820281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.820481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.820500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.820513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.820539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.108 [2024-12-11 15:02:22.832912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.833288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.108 [2024-12-11 15:02:22.833318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.108 [2024-12-11 15:02:22.833334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.108 [2024-12-11 15:02:22.833561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.108 [2024-12-11 15:02:22.833782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.108 [2024-12-11 15:02:22.833802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.108 [2024-12-11 15:02:22.833816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.108 [2024-12-11 15:02:22.833829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.108 [2024-12-11 15:02:22.846360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.108 [2024-12-11 15:02:22.846725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.109 [2024-12-11 15:02:22.846754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.109 [2024-12-11 15:02:22.846771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.109 [2024-12-11 15:02:22.847004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.109 [2024-12-11 15:02:22.847204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.109 [2024-12-11 15:02:22.847223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.109 [2024-12-11 15:02:22.847242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.109 [2024-12-11 15:02:22.847255] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.109 [2024-12-11 15:02:22.847361] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:25:40.109 [2024-12-11 15:02:22.847419] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.109 [2024-12-11 15:02:22.859968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.109 [2024-12-11 15:02:22.860309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.109 [2024-12-11 15:02:22.860336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.109 [2024-12-11 15:02:22.860351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.109 [2024-12-11 15:02:22.860581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.109 [2024-12-11 15:02:22.860803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.109 [2024-12-11 15:02:22.860837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.109 [2024-12-11 15:02:22.860850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.109 [2024-12-11 15:02:22.860863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
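The SPDK/DPDK initialization lines above show the restarted target coming up with core mask 0xE (-m 0xE on the nvmf_tgt command line, forwarded to the DPDK EAL as -c 0xE). 0xE is binary 1110, i.e. cores 1-3, which matches the "Total cores available: 3" notice and the three reactors that start on cores 1, 2 and 3 further down. A one-line sketch to decode such a mask:

    mask=0xE; for i in $(seq 0 31); do (( (mask >> i) & 1 )) && echo "core $i"; done   # prints core 1, core 2, core 3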
00:25:40.109 [2024-12-11 15:02:22.873497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.109 [2024-12-11 15:02:22.873919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.109 [2024-12-11 15:02:22.873962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.109 [2024-12-11 15:02:22.873978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.109 [2024-12-11 15:02:22.874215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.109 [2024-12-11 15:02:22.874453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.109 [2024-12-11 15:02:22.874490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.109 [2024-12-11 15:02:22.874503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.109 [2024-12-11 15:02:22.874516] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.368 [2024-12-11 15:02:22.886894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.368 [2024-12-11 15:02:22.887254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.368 [2024-12-11 15:02:22.887283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.368 [2024-12-11 15:02:22.887299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.368 [2024-12-11 15:02:22.887523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.368 [2024-12-11 15:02:22.887762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.368 [2024-12-11 15:02:22.887788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.368 [2024-12-11 15:02:22.887802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.368 [2024-12-11 15:02:22.887815] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.368 [2024-12-11 15:02:22.900184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.368 [2024-12-11 15:02:22.900671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.900700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.900716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.900971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.901165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.901183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.901195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.901206] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.369 [2024-12-11 15:02:22.913473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.913794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.913839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.913854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.914086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.914295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.914313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.914325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.914336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.369 [2024-12-11 15:02:22.922804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:40.369 [2024-12-11 15:02:22.926767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.927142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.927170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.927185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.927423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.927660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.927680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.927693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.927709] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.369 [2024-12-11 15:02:22.940153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.940761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.940801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.940822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.941092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.941293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.941312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.941327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.941342] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.369 [2024-12-11 15:02:22.953522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.953911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.953940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.953956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.954202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.954412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.954431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.954443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.954454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.369 [2024-12-11 15:02:22.966802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.967177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.967207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.967224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.967465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.967708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.967729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.967742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.967754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.369 [2024-12-11 15:02:22.979961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.369 [2024-12-11 15:02:22.979999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.369 [2024-12-11 15:02:22.980019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.980022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.369 [2024-12-11 15:02:22.980045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:40.369 [2024-12-11 15:02:22.980075] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.369 [2024-12-11 15:02:22.980432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.369 [2024-12-11 15:02:22.980460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.369 [2024-12-11 15:02:22.980476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.369 [2024-12-11 15:02:22.980735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.369 [2024-12-11 15:02:22.980966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.369 [2024-12-11 15:02:22.980985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.369 [2024-12-11 15:02:22.980998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.369 [2024-12-11 15:02:22.981010] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.369 [2024-12-11 15:02:22.981621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.369 [2024-12-11 15:02:22.981682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.369 [2024-12-11 15:02:22.981685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.369 [2024-12-11 15:02:22.993511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.369 [2024-12-11 15:02:22.994099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:22.994139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:22.994160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:22.994415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:22.994657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:22.994680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:22.994697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:22.994713] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
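The app_setup_trace notices above (enabled by -e 0xFFFF on the nvmf_tgt command line) spell out how to grab a trace while the target runs; condensed from the log's own hints, the snapshot would look roughly like:

    spdk_trace -s nvmf -i 0        # decode live events; plain 'spdk_trace' also works when only one SPDK app is running
    cp /dev/shm/nvmf_trace.0 /tmp/ # or keep the raw shm file for offline analysis/debug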
00:25:40.370 [2024-12-11 15:02:23.007078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.007604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.007646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.007668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.007909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.008122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.008153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.008170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.008186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.370 [2024-12-11 15:02:23.020491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.021153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.021193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.021215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.021455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.021714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.021737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.021755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.021771] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.370 [2024-12-11 15:02:23.034175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.034674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.034715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.034736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.034975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.035187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.035208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.035224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.035239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.370 [2024-12-11 15:02:23.047766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.048347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.048387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.048409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.048669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.048903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.048924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.048941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.048968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.370 [2024-12-11 15:02:23.061399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.061941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.061980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.062001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.062253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.062466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.062486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.062504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.062520] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.370 [2024-12-11 15:02:23.075051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.075385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.075415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.075431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.075674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.075902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.075922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.075935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.075947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.370 [2024-12-11 15:02:23.088510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.088885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.088914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.088930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.089146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.089392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.089413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.089427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.089440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.370 [2024-12-11 15:02:23.102127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.102454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.102492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.102509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.102734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.102963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.102983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.102996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.103008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.370 [2024-12-11 15:02:23.115744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.370 [2024-12-11 15:02:23.116087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.370 [2024-12-11 15:02:23.116115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.370 [2024-12-11 15:02:23.116132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.370 [2024-12-11 15:02:23.116348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.370 [2024-12-11 15:02:23.116578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.370 [2024-12-11 15:02:23.116599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.370 [2024-12-11 15:02:23.116613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.370 [2024-12-11 15:02:23.116625] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.370 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.370 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:40.370 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.371 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.371 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.371 [2024-12-11 15:02:23.129269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.371 [2024-12-11 15:02:23.129616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.371 [2024-12-11 15:02:23.129646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.371 [2024-12-11 15:02:23.129662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.371 [2024-12-11 15:02:23.129901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.371 [2024-12-11 15:02:23.130125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.371 [2024-12-11 15:02:23.130145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.371 [2024-12-11 15:02:23.130158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.371 [2024-12-11 15:02:23.130170] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.629 [2024-12-11 15:02:23.142945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.629 [2024-12-11 15:02:23.143291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.629 [2024-12-11 15:02:23.143322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.629 [2024-12-11 15:02:23.143345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.629 [2024-12-11 15:02:23.143597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.629 [2024-12-11 15:02:23.143819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.629 [2024-12-11 15:02:23.143854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.629 [2024-12-11 15:02:23.143868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.629 [2024-12-11 15:02:23.143881] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.629 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.629 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.629 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.629 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.629 [2024-12-11 15:02:23.154613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.630 [2024-12-11 15:02:23.156439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.630 [2024-12-11 15:02:23.156770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.630 [2024-12-11 15:02:23.156799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.630 [2024-12-11 15:02:23.156816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.630 [2024-12-11 15:02:23.157046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.630 [2024-12-11 15:02:23.157275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.630 [2024-12-11 15:02:23.157295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.630 [2024-12-11 15:02:23.157307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.630 [2024-12-11 15:02:23.157319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.630 [2024-12-11 15:02:23.170026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.630 [2024-12-11 15:02:23.170463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.630 [2024-12-11 15:02:23.170496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.630 [2024-12-11 15:02:23.170514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.630 [2024-12-11 15:02:23.170744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.630 [2024-12-11 15:02:23.170999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.630 [2024-12-11 15:02:23.171020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.630 [2024-12-11 15:02:23.171034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.630 [2024-12-11 15:02:23.171048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.630 [2024-12-11 15:02:23.183410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.630 [2024-12-11 15:02:23.183761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.630 [2024-12-11 15:02:23.183790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.630 [2024-12-11 15:02:23.183806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.630 [2024-12-11 15:02:23.184039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.630 [2024-12-11 15:02:23.184261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.630 [2024-12-11 15:02:23.184281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.630 [2024-12-11 15:02:23.184294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.630 [2024-12-11 15:02:23.184306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.630 [2024-12-11 15:02:23.196899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.630 [2024-12-11 15:02:23.197315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.630 [2024-12-11 15:02:23.197344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.630 [2024-12-11 15:02:23.197361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.630 [2024-12-11 15:02:23.197603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.630 [2024-12-11 15:02:23.197817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.630 [2024-12-11 15:02:23.197837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.630 [2024-12-11 15:02:23.197851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.630 [2024-12-11 15:02:23.197865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:40.630 Malloc0 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.630 [2024-12-11 15:02:23.210648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.630 [2024-12-11 15:02:23.211109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.630 [2024-12-11 15:02:23.211138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235eee0 with addr=10.0.0.2, port=4420 00:25:40.630 [2024-12-11 15:02:23.211155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235eee0 is same with the state(6) to be set 00:25:40.630 [2024-12-11 15:02:23.211409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235eee0 (9): Bad file descriptor 00:25:40.630 [2024-12-11 15:02:23.211664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:40.630 [2024-12-11 15:02:23.211686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:40.630 [2024-12-11 15:02:23.211701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:40.630 [2024-12-11 15:02:23.211714] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:40.630 [2024-12-11 15:02:23.224285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:40.630 [2024-12-11 15:02:23.224746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.630 15:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 779409 00:25:40.630 [2024-12-11 15:02:23.255816] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:25:41.602 3621.00 IOPS, 14.14 MiB/s [2024-12-11T14:02:25.749Z] 4304.29 IOPS, 16.81 MiB/s [2024-12-11T14:02:26.683Z] 4817.50 IOPS, 18.82 MiB/s [2024-12-11T14:02:27.618Z] 5221.67 IOPS, 20.40 MiB/s [2024-12-11T14:02:28.552Z] 5551.50 IOPS, 21.69 MiB/s [2024-12-11T14:02:29.485Z] 5808.73 IOPS, 22.69 MiB/s [2024-12-11T14:02:30.418Z] 6024.33 IOPS, 23.53 MiB/s [2024-12-11T14:02:31.792Z] 6211.46 IOPS, 24.26 MiB/s [2024-12-11T14:02:32.727Z] 6376.21 IOPS, 24.91 MiB/s [2024-12-11T14:02:32.727Z] 6509.27 IOPS, 25.43 MiB/s 00:25:49.954 Latency(us) 00:25:49.954 [2024-12-11T14:02:32.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.954 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:49.954 Verification LBA range: start 0x0 length 0x4000 00:25:49.954 Nvme1n1 : 15.05 6495.33 25.37 10050.23 0.00 7693.22 837.40 43496.49 00:25:49.954 [2024-12-11T14:02:32.727Z] =================================================================================================================== 00:25:49.954 [2024-12-11T14:02:32.727Z] Total : 6495.33 25.37 10050.23 0.00 7693.22 837.40 43496.49 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:49.954 rmmod nvme_tcp 00:25:49.954 rmmod nvme_fabrics 00:25:49.954 rmmod nvme_keyring 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 780084 ']' 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 780084 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 780084 ']' 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 780084 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.954 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780084 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780084' 00:25:50.212 killing process with pid 780084 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 780084 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 780084 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.212 15:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
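To recap the tgt_init sequence that the trace above interleaves with the reconnect noise: the test created the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1, attached the namespace, and opened the 10.0.0.2:4420 listener, at which point the waiting bdevperf reconnected ("Resetting controller successful") and ran to completion. The same sequence as plain rpc.py calls (a sketch; command names and arguments are taken verbatim from the trace, and rpc.py talks to /var/tmp/spdk.sock by default):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

As a sanity check on the result table, the MiB/s column is just IOPS at the 4096-byte IO size: 6495.33 x 4096 / 2^20 = 25.37 MiB/s, matching the Nvme1n1 row.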
00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.750 00:25:52.750 real 0m22.745s 00:25:52.750 user 1m1.030s 00:25:52.750 sys 0m4.179s 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:52.750 ************************************ 00:25:52.750 END TEST nvmf_bdevperf 00:25:52.750 ************************************ 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.750 ************************************ 00:25:52.750 START TEST nvmf_target_disconnect 00:25:52.750 ************************************ 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:52.750 * Looking for test storage... 00:25:52.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.750 15:02:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:52.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.750 --rc genhtml_branch_coverage=1 00:25:52.750 --rc genhtml_function_coverage=1 00:25:52.750 --rc genhtml_legend=1 00:25:52.750 --rc geninfo_all_blocks=1 00:25:52.750 --rc geninfo_unexecuted_blocks=1 00:25:52.750 00:25:52.750 ' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:52.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.750 --rc genhtml_branch_coverage=1 00:25:52.750 --rc genhtml_function_coverage=1 00:25:52.750 --rc genhtml_legend=1 00:25:52.750 --rc geninfo_all_blocks=1 00:25:52.750 --rc geninfo_unexecuted_blocks=1 00:25:52.750 00:25:52.750 ' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:52.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.750 --rc genhtml_branch_coverage=1 00:25:52.750 --rc genhtml_function_coverage=1 00:25:52.750 --rc genhtml_legend=1 00:25:52.750 --rc geninfo_all_blocks=1 00:25:52.750 --rc geninfo_unexecuted_blocks=1 00:25:52.750 00:25:52.750 ' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:52.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.750 --rc genhtml_branch_coverage=1 00:25:52.750 --rc genhtml_function_coverage=1 00:25:52.750 --rc genhtml_legend=1 00:25:52.750 --rc geninfo_all_blocks=1 00:25:52.750 --rc geninfo_unexecuted_blocks=1 00:25:52.750 00:25:52.750 ' 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.750 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.751 15:02:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:54.656 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:54.656 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:54.656 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:54.656 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
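The discovery pass above resolves each supported Intel E810 function (0x8086:0x159b) to its kernel interface by globbing sysfs, which is how cvl_0_0 and cvl_0_1 turn up under 0000:0a:00.0 and 0000:0a:00.1. A minimal standalone sketch of that lookup, reusing the pci_net_devs glob visible in the trace (the pci_to_netdev wrapper name is illustrative, not part of nvmf/common.sh):

    # Resolve a PCI function to the net device(s) bound to it, the same way
    # the harness does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    pci_to_netdev() {
        local pci=$1
        local devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${devs[0]} ]] || return 1   # glob stayed literal: no netdev bound
        echo "${devs[@]##*/}"             # strip the sysfs prefix, keep the names
    }
    pci_to_netdev 0000:0a:00.0            # prints "cvl_0_0" on this rig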
00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.656 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.915 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:25:54.915 00:25:54.915 --- 10.0.0.2 ping statistics --- 00:25:54.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.915 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:25:54.916 00:25:54.916 --- 10.0.0.1 ping statistics --- 00:25:54.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.916 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:54.916 ************************************ 00:25:54.916 START TEST nvmf_target_disconnect_tc1 00:25:54.916 ************************************ 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:54.916 15:02:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:54.916 [2024-12-11 15:02:37.575995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:54.916 [2024-12-11 15:02:37.576061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1237f40 with addr=10.0.0.2, port=4420 00:25:54.916 [2024-12-11 15:02:37.576100] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:54.916 [2024-12-11 15:02:37.576131] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:54.916 [2024-12-11 15:02:37.576161] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:54.916 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:54.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:54.916 Initializing NVMe Controllers 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:54.916 00:25:54.916 real 0m0.098s 00:25:54.916 user 0m0.047s 00:25:54.916 sys 0m0.050s 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:54.916 ************************************ 00:25:54.916 END TEST nvmf_target_disconnect_tc1 00:25:54.916 ************************************ 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:54.916 ************************************ 00:25:54.916 START TEST nvmf_target_disconnect_tc2 00:25:54.916 ************************************ 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=783325 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 783325 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 783325 ']' 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.916 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.175 [2024-12-11 15:02:37.690660] Starting SPDK v25.01-pre git sha1 3aefe4228 / DPDK 24.03.0 initialization... 00:25:55.175 [2024-12-11 15:02:37.690768] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.175 [2024-12-11 15:02:37.764227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.175 [2024-12-11 15:02:37.825414] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.175 [2024-12-11 15:02:37.825475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:55.175 [2024-12-11 15:02:37.825502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.175 [2024-12-11 15:02:37.825513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.175 [2024-12-11 15:02:37.825523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.175 [2024-12-11 15:02:37.827174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:25:55.175 [2024-12-11 15:02:37.827236] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:25:55.175 [2024-12-11 15:02:37.827302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:25:55.175 [2024-12-11 15:02:37.827304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.433 15:02:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.433 Malloc0 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.433 [2024-12-11 15:02:38.021524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.433 15:02:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.433 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.434 [2024-12-11 15:02:38.049865] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=783383 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:55.434 15:02:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:57.339 15:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 783325 00:25:57.339 15:02:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error 
(sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 [2024-12-11 15:02:40.077454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read 
completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Read completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.339 Write completed with error (sct=0, sc=8) 00:25:57.339 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 [2024-12-11 15:02:40.077822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 
00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Read completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 Write completed with error (sct=0, sc=8) 00:25:57.340 starting I/O failed 00:25:57.340 [2024-12-11 15:02:40.078180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:57.340 [2024-12-11 15:02:40.078386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.078424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.078561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.078592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.078685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.078711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.078793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.078819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.078938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.078964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.079083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.079110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 
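The completion storms above (32 outstanding reads/writes failing per queue, followed by "CQ transport error -6" on qpair ids 4, 3 and 2) are the intended effect of the kill -9 issued moments earlier: target_disconnect.sh launches the reconnect example against the live target, waits two seconds, then hard-kills nvmf_tgt, so every in-flight command completes in error. Under the NVMe generic command status codes, sct=0/sc=8 is "Command Aborted due to SQ Deletion", which matches queues being torn down underneath the host. Condensed from the trace, the driving sequence is essentially:

    # Sequence reconstructed from the target_disconnect.sh trace above
    # ($nvmfpid is 783325 in this run; the reconnect pid is 783383).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!
    sleep 2             # let I/O build up across the qpairs
    kill -9 "$nvmfpid"  # drop the target out from under the host
    sleep 2             # give the host time to observe and report the failures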
00:25:57.340 [2024-12-11 15:02:40.079205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.079231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.079350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.079378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.079496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.079522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.079629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.079663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.079792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.079833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.079971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.080012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.080128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.080164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.080282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.080322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.080615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.080657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.080764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.080801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 
00:25:57.340 [2024-12-11 15:02:40.080916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.080943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.340 [2024-12-11 15:02:40.081878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.340 [2024-12-11 15:02:40.081905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.340 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-11 15:02:40.082308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.082953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.082980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.083103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.083249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.083388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.083525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-11 15:02:40.083654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.083771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.083919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.083945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.084893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.084920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-11 15:02:40.085045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.085854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.085999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.086109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.086213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 
00:25:57.341 [2024-12-11 15:02:40.086358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.086535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.086666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.086786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.086899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.086925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.087031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.087057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.341 [2024-12-11 15:02:40.087163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.341 [2024-12-11 15:02:40.087189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.341 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.087301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.087326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.087446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.087472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.087556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.087593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-11 15:02:40.087709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.087736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.087881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.087908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.087993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.088019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.088160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.088186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.088305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.088332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.088470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.088510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.088625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.088654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.088782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.088809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.089034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.089096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.089217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.089242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-11 15:02:40.089388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.089414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.089534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.089576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.089701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.089728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.089821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.089848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.089997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.090136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.090252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.090404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.090576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.090695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-11 15:02:40.090842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.090869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.090978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.091921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.091947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 
00:25:57.342 [2024-12-11 15:02:40.092200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.342 [2024-12-11 15:02:40.092883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.342 qpair failed and we were unable to recover it. 00:25:57.342 [2024-12-11 15:02:40.092966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.092992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.093141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.093249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.093363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 
00:25:57.343 [2024-12-11 15:02:40.093534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.093645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.093794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.093962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.093987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.094138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.094163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.094304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.094328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.094413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.094439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.094526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.094559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.094700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.094725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.094843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.094869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 
00:25:57.343 [2024-12-11 15:02:40.094990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.095938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.095966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.096089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.096238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 
00:25:57.343 [2024-12-11 15:02:40.096380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.096520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.096637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.096743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.096885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.096911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 
00:25:57.343 [2024-12-11 15:02:40.097674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.343 [2024-12-11 15:02:40.097920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.343 [2024-12-11 15:02:40.097945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.343 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.098830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 
00:25:57.344 [2024-12-11 15:02:40.098931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.098956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.099910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.099935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 
00:25:57.344 [2024-12-11 15:02:40.100155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.100957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.100982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.101105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.101214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.101356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 
00:25:57.344 [2024-12-11 15:02:40.101492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.101631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.101775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.101911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.101936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.102026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.102052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.102134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.102159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.102269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.102305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.102433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.102460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.344 [2024-12-11 15:02:40.102588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.344 [2024-12-11 15:02:40.102616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.344 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.102707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.102733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 
00:25:57.345 [2024-12-11 15:02:40.102876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.102901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.103915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.103940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.104053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 
00:25:57.345 [2024-12-11 15:02:40.104166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.104315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.104458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.104596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.104739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.104899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.104938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.105068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.105095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.105196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.105224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.105338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.105364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.105477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.105504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 
00:25:57.345 [2024-12-11 15:02:40.105648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.105676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 [2024-12-11 15:02:40.105752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.345 [2024-12-11 15:02:40.105778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.345 qpair failed and we were unable to recover it. 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Write completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Write completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Write completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Write completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Read completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Write completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 Write completed with error (sct=0, sc=8) 00:25:57.345 starting I/O failed 00:25:57.345 [2024-12-11 15:02:40.106110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
00:25:57.345 [2024-12-11 15:02:40.106209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.345 [2024-12-11 15:02:40.106243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.345 qpair failed and we were unable to recover it.
00:25:57.345 [2024-12-11 15:02:40.106383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.345 [2024-12-11 15:02:40.106411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.345 qpair failed and we were unable to recover it.
00:25:57.345 [2024-12-11 15:02:40.106529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.345 [2024-12-11 15:02:40.106569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.345 qpair failed and we were unable to recover it.
00:25:57.345 [2024-12-11 15:02:40.106688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.345 [2024-12-11 15:02:40.106714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.345 qpair failed and we were unable to recover it.
00:25:57.345 [2024-12-11 15:02:40.106829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.345 [2024-12-11 15:02:40.106855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.345 qpair failed and we were unable to recover it.
00:25:57.345 [2024-12-11 15:02:40.106932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.106957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.107117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.107160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.107266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.107291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.107424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.107464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.107565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.107594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.107744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.107771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.107857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.107884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.108027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.108055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.108166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.108192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.108305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.108332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.108478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.108504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.108610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.631 [2024-12-11 15:02:40.108641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.631 qpair failed and we were unable to recover it.
00:25:57.631 [2024-12-11 15:02:40.108762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.108789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.108903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.108929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.109918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.109961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.110171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.110311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.110429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.110600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.110749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.110873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.110990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.111153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.111363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.111488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.111614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.111728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.111896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.111922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.112059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.112102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.112247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.112281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.112422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.112448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.112566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.112593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.112738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.112765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.112931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.112975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.113136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.113180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.113263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.113289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.113399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.113424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.113579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.113606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.113683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.113708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.113833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.113861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.114052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.114078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.114154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.114179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.114299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.114325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.114439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.114464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.114558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.632 [2024-12-11 15:02:40.114583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.632 qpair failed and we were unable to recover it.
00:25:57.632 [2024-12-11 15:02:40.114670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.114697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.114842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.114868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.114953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.114978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.115095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.115124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.115217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.115245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.115342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.115373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.115466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.115493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.115638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.115667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.115813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.115839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.116929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.116968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.117096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.117150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.117335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.117388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.117479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.117505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.117615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.117655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.117816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.117846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.117968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.117996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.118199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.118266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.118379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.118404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.118562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.118591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.118673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.118700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.118791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.118817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.118911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.118938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.119076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.119105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.119297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.119326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.119422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.119447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.119565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.119592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.119736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.119767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.119913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.119949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.120069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.120096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.120179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.120205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.120314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.120341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.120458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.120487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.633 [2024-12-11 15:02:40.120582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.633 [2024-12-11 15:02:40.120609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.633 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.120738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.120777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.120898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.120926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.121904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.121929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.122885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.122910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.123883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.123932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.124161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.124358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.124499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.124621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.124789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.124898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.124976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.125108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.125248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.125386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.125532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.125684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.125856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.125882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.126062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.126113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.126201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.126227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.634 [2024-12-11 15:02:40.126341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.634 [2024-12-11 15:02:40.126367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.634 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.126484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.126512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.126638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.126665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.126784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.126810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.126927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.126953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.127894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.127920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.128939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.128966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.129146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.129203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.129313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.129339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.129481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.129508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.129638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.129671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.129764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.129791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.129904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.129929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.130020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.130045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.130227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.130276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.130363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.130388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.130472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.130497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.130667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.130710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.130851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.130881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.131030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.131170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.131283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.131427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.131575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.635 [2024-12-11 15:02:40.131691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.635 qpair failed and we were unable to recover it.
00:25:57.635 [2024-12-11 15:02:40.131803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.635 [2024-12-11 15:02:40.131828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.635 qpair failed and we were unable to recover it. 00:25:57.635 [2024-12-11 15:02:40.131916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.635 [2024-12-11 15:02:40.131941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.635 qpair failed and we were unable to recover it. 00:25:57.635 [2024-12-11 15:02:40.132050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.635 [2024-12-11 15:02:40.132075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.635 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.132169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.132198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.132312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.132340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.132464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.132490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.132631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.132658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.132733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.132758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.132870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.132903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 
00:25:57.636 [2024-12-11 15:02:40.133126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.133941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.133974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.134102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.134246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.134364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 
00:25:57.636 [2024-12-11 15:02:40.134467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.134643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.134811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.134929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.134955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.135070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.135193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.135365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.135509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.135635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.135778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 
00:25:57.636 [2024-12-11 15:02:40.135944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.135970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.136125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.136165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.136263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.136290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.136409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.136435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.136577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.136603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.636 [2024-12-11 15:02:40.136721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.636 [2024-12-11 15:02:40.136747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.636 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.136832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.136858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.136974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.137142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.137289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 
00:25:57.637 [2024-12-11 15:02:40.137430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.137531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.137686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.137806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.137833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.137975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 
00:25:57.637 [2024-12-11 15:02:40.138728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.138896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.138987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.139905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.139930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 
00:25:57.637 [2024-12-11 15:02:40.140010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.140871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.140983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.141129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.141272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 
00:25:57.637 [2024-12-11 15:02:40.141418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.141586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.141705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.141868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.141930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.637 qpair failed and we were unable to recover it. 00:25:57.637 [2024-12-11 15:02:40.142041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.637 [2024-12-11 15:02:40.142067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.142190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.142306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.142446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.142560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.142679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 
00:25:57.638 [2024-12-11 15:02:40.142838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.142948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.142975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.143965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.143991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.144109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 
00:25:57.638 [2024-12-11 15:02:40.144220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.144353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.144483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.144638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.144757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.144900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.144927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 
00:25:57.638 [2024-12-11 15:02:40.145479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.145902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.145927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 
00:25:57.638 [2024-12-11 15:02:40.146790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.146924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.146950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.147061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.147087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.638 qpair failed and we were unable to recover it. 00:25:57.638 [2024-12-11 15:02:40.147162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.638 [2024-12-11 15:02:40.147187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.147274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.147299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.147385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.147409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.147516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.147540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.147665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.147690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.147807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.147832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.147948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.147974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 
00:25:57.639 [2024-12-11 15:02:40.148067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.148962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.148988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 
00:25:57.639 [2024-12-11 15:02:40.149347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.149910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.149994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.150129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.150250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.150424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.150586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 
00:25:57.639 [2024-12-11 15:02:40.150752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.150901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.150928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.151834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 00:25:57.639 [2024-12-11 15:02:40.151974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.639 [2024-12-11 15:02:40.152000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.639 qpair failed and we were unable to recover it. 
00:25:57.639 [2024-12-11 15:02:40.152113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.639 [2024-12-11 15:02:40.152138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.639 qpair failed and we were unable to recover it.
00:25:57.640 [2024-12-11 15:02:40.153401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.640 [2024-12-11 15:02:40.153441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.640 qpair failed and we were unable to recover it.
00:25:57.640 [2024-12-11 15:02:40.154603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.640 [2024-12-11 15:02:40.154643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.640 qpair failed and we were unable to recover it.
00:25:57.645 (the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously through [2024-12-11 15:02:40.182729], alternating between tqpair=0x11b6fa0, tqpair=0x7f0dd4000b90, and tqpair=0x7f0dd8000b90, always with addr=10.0.0.2, port=4420; repeated identical messages elided)
00:25:57.645 [2024-12-11 15:02:40.182810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.182836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.182956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.182982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.183948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.183975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.184093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.184119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 
00:25:57.645 [2024-12-11 15:02:40.184233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.184259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.184367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.184393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.184489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.184529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.184690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.184717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.184808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.184840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.184989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.185037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.645 [2024-12-11 15:02:40.185187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.645 [2024-12-11 15:02:40.185238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.645 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.185412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.185464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.185588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.185615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.185709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.185735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 
00:25:57.646 [2024-12-11 15:02:40.185861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.185900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.186105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.186311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.186413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.186542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.186682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.186822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.186967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.187145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.187275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 
00:25:57.646 [2024-12-11 15:02:40.187416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.187565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.187669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.187812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.187860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 
00:25:57.646 [2024-12-11 15:02:40.188785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.188922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.188948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.189844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.189987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 
00:25:57.646 [2024-12-11 15:02:40.190121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.190261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.190384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.190554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.190708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.646 [2024-12-11 15:02:40.190881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.646 [2024-12-11 15:02:40.190908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.646 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.191052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.191078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.191192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.191219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.191350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.191376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.191521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.191556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 
00:25:57.647 [2024-12-11 15:02:40.191657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.191685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.191835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.191882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.192017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.192065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.192212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.192259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.192372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.192398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.192509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.192534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.192666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.192695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.192886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.192926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.193050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.193089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.193218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.193256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 
00:25:57.647 [2024-12-11 15:02:40.193409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.193455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.193571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.193597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.193719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.193755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.193903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.193929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.194016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.194136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.194322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.194463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.194607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.194753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 
00:25:57.647 [2024-12-11 15:02:40.194894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.194920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.195924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.195950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.196093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.196118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 
00:25:57.647 [2024-12-11 15:02:40.196232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.196258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.196374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.196400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.196480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.196506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.196602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.196629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.647 qpair failed and we were unable to recover it. 00:25:57.647 [2024-12-11 15:02:40.196719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.647 [2024-12-11 15:02:40.196746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.196863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.196890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.196982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.197117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.197259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.197375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 
00:25:57.648 [2024-12-11 15:02:40.197521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.197681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.197824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.197931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.197957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.198089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.198257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.198365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.198550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.198697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.198812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 
00:25:57.648 [2024-12-11 15:02:40.198972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.198997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.199943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.199969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.200122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.200172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.200315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.200341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 
00:25:57.648 [2024-12-11 15:02:40.200455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.200482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.200605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.200637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.200778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.200804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.200945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.200995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.201181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.201207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.201344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.201370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.201518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.201555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.201644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.201670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.201776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.201801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 00:25:57.648 [2024-12-11 15:02:40.201975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.648 [2024-12-11 15:02:40.202026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.648 qpair failed and we were unable to recover it. 
00:25:57.648 [2024-12-11 15:02:40.202199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.648 [2024-12-11 15:02:40.202252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.648 qpair failed and we were unable to recover it.
00:25:57.649 [2024-12-11 15:02:40.206496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.649 [2024-12-11 15:02:40.206535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.649 qpair failed and we were unable to recover it.
00:25:57.654 [... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously, alternating between tqpair=0x7f0dd4000b90 and tqpair=0x11b6fa0, through [2024-12-11 15:02:40.232003]; every connection attempt to 10.0.0.2:4420 was refused ...]
00:25:57.654 [2024-12-11 15:02:40.232089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-12-11 15:02:40.232115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.654 qpair failed and we were unable to recover it. 00:25:57.654 [2024-12-11 15:02:40.232253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-12-11 15:02:40.232303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.654 qpair failed and we were unable to recover it. 00:25:57.654 [2024-12-11 15:02:40.232413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-12-11 15:02:40.232438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.654 qpair failed and we were unable to recover it. 00:25:57.654 [2024-12-11 15:02:40.232526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-12-11 15:02:40.232562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.654 qpair failed and we were unable to recover it. 00:25:57.654 [2024-12-11 15:02:40.232672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-12-11 15:02:40.232697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.654 qpair failed and we were unable to recover it. 00:25:57.654 [2024-12-11 15:02:40.232842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.654 [2024-12-11 15:02:40.232867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.654 qpair failed and we were unable to recover it. 00:25:57.654 [2024-12-11 15:02:40.232986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.233109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.233213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.233379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 
00:25:57.655 [2024-12-11 15:02:40.233526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.233685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.233831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.233868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.234032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.234081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.234253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.234299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.234417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.234442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.234559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.234585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.234722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.234772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.234886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.234932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 
00:25:57.655 [2024-12-11 15:02:40.235193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.235960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.235986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.236128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.236154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.236269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.236296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.236394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.236422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 
00:25:57.655 [2024-12-11 15:02:40.236535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.236570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.236711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.236758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.236896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.236942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.237912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.237938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 
00:25:57.655 [2024-12-11 15:02:40.238021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.238046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.238131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.238156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.238298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.655 [2024-12-11 15:02:40.238324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.655 qpair failed and we were unable to recover it. 00:25:57.655 [2024-12-11 15:02:40.238421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.238446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.238569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.238597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.238716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.238742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.238863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.238889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 
00:25:57.656 [2024-12-11 15:02:40.239405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.239845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.239985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.240123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.240312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.240419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.240556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.240679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 
00:25:57.656 [2024-12-11 15:02:40.240846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.240893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.240983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.241009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.241111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.241160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.241248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.241274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.241423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.241449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.241592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.241619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.241786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.241834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.241979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.242164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.242330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 
00:25:57.656 [2024-12-11 15:02:40.242448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.242563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.242702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.242831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.242874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.243007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.243032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.243151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.243176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.243289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.243314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.243471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.243510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.243638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.243667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 00:25:57.656 [2024-12-11 15:02:40.243840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.656 [2024-12-11 15:02:40.243893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.656 qpair failed and we were unable to recover it. 
00:25:57.657 [2024-12-11 15:02:40.244002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.244164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.244327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.244468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.244623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.244757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.244896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.244921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.245050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.245186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.245303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 
00:25:57.657 [2024-12-11 15:02:40.245405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.245581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.245752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.245860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.245885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 
00:25:57.657 [2024-12-11 15:02:40.246839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.246952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.246977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.247895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.247921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 
00:25:57.657 [2024-12-11 15:02:40.248176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.248963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.248988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.657 [2024-12-11 15:02:40.249108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.657 [2024-12-11 15:02:40.249154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.657 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.249270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.249295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.249410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.249435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 
00:25:57.658 [2024-12-11 15:02:40.249515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.249540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.249665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.249690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.249767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.249793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.249877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.249903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 
00:25:57.658 [2024-12-11 15:02:40.250806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.250944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.250971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.251919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.251944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 
00:25:57.658 [2024-12-11 15:02:40.252055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.252081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.252218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.252245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.252356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.252382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.252522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.252555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.252673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.252698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.252855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.252881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.252995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.253131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.253269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.253408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 
00:25:57.658 [2024-12-11 15:02:40.253515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.253660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.658 qpair failed and we were unable to recover it. 00:25:57.658 [2024-12-11 15:02:40.253827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.658 [2024-12-11 15:02:40.253853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.253930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.253956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.254104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.254129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.254239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.254265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.254389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.254415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.254566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.254592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.254715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.254741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.254855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.254881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 
00:25:57.659 [2024-12-11 15:02:40.255027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.255163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.255304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.255443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.255610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.255721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.255886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.255931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.256018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.256184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.256320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 
00:25:57.659 [2024-12-11 15:02:40.256484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.256634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.256784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.256936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.256970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.257120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.257258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.257394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.257523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.257644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.257779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 
00:25:57.659 [2024-12-11 15:02:40.257912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.257937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.258927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.258954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.259101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.659 [2024-12-11 15:02:40.259127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.659 qpair failed and we were unable to recover it. 00:25:57.659 [2024-12-11 15:02:40.259274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.259322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 
00:25:57.660 [2024-12-11 15:02:40.259434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.259460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.259591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.259627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.259750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.259804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.259933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.259979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.260155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.260319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.260452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.260587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.260690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.260796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 
00:25:57.660 [2024-12-11 15:02:40.260913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.260939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.261917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.261943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.262091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.262205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 
00:25:57.660 [2024-12-11 15:02:40.262307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.262424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.262560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.262696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.262833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.262858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.263000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.263170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.263308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.263478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.263643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 
00:25:57.660 [2024-12-11 15:02:40.263816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.263964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.263991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.264128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.264154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.264249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.264276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.264428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.264569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.264599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.264714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.264739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.264873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.660 [2024-12-11 15:02:40.264908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.660 qpair failed and we were unable to recover it. 00:25:57.660 [2024-12-11 15:02:40.265056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.265102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.265235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.265284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 
00:25:57.661 [2024-12-11 15:02:40.265425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.265452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.265588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.265627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.265726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.265754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.265897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.265923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.266069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.266095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.266217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.266244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.266436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.266462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.266568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.266595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.266688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.266715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.266832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.266858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 
00:25:57.661 [2024-12-11 15:02:40.266998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.267147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.267339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.267498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.267664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.267804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.267919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.267943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.268057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.268211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.268378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 
00:25:57.661 [2024-12-11 15:02:40.268490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.268640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.268791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.268938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.268965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.269077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.269103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.269241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.269266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.269383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.269409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.269525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.269562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.269657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.269684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.269833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.269864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 
00:25:57.661 [2024-12-11 15:02:40.270030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.270065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.270201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.270236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.270374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.270409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.270579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.270606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.270723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.270749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.661 [2024-12-11 15:02:40.270861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.661 [2024-12-11 15:02:40.270894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.661 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.270998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.271025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.271164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.271199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.271356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.271403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.271521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.271554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 
00:25:57.662 [2024-12-11 15:02:40.271672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.271698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.271834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.271882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.271999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.272032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.272156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.272182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.272287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.272313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.272460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.272485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.272619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.272659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.272787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.272815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.272958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.273006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.273186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.273232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 
00:25:57.662 [2024-12-11 15:02:40.273485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.273511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.273624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.273650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.273765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.273791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.273935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.273985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.274160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.274203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.274316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.274363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.274478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.274503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.274626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.274652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.274744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.274770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.274857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.274882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 
00:25:57.662 [2024-12-11 15:02:40.274993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.275970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.275998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.276094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.276122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.276238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.276264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 
00:25:57.662 [2024-12-11 15:02:40.276359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.276386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.276492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.276518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.276639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.276665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.276776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.662 [2024-12-11 15:02:40.276802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.662 qpair failed and we were unable to recover it. 00:25:57.662 [2024-12-11 15:02:40.276909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.663 [2024-12-11 15:02:40.276934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.663 qpair failed and we were unable to recover it. 00:25:57.663 [2024-12-11 15:02:40.277056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.663 [2024-12-11 15:02:40.277081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.663 qpair failed and we were unable to recover it. 00:25:57.663 [2024-12-11 15:02:40.277201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.663 [2024-12-11 15:02:40.277227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.663 qpair failed and we were unable to recover it. 00:25:57.663 [2024-12-11 15:02:40.277313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.663 [2024-12-11 15:02:40.277339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.663 qpair failed and we were unable to recover it. 00:25:57.663 [2024-12-11 15:02:40.277479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.663 [2024-12-11 15:02:40.277504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.663 qpair failed and we were unable to recover it. 00:25:57.663 [2024-12-11 15:02:40.277607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.663 [2024-12-11 15:02:40.277635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.663 qpair failed and we were unable to recover it. 
00:25:57.667 [2024-12-11 15:02:40.306721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.667 [2024-12-11 15:02:40.306747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.667 qpair failed and we were unable to recover it. 00:25:57.667 [2024-12-11 15:02:40.306841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.667 [2024-12-11 15:02:40.306869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.667 qpair failed and we were unable to recover it. 00:25:57.667 [2024-12-11 15:02:40.307008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.667 [2024-12-11 15:02:40.307057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.667 qpair failed and we were unable to recover it. 00:25:57.667 [2024-12-11 15:02:40.307206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.667 [2024-12-11 15:02:40.307254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.667 qpair failed and we were unable to recover it. 00:25:57.667 [2024-12-11 15:02:40.307388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.307445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.307586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.307613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.307728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.307754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.307883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.307935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.308027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.308193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 
00:25:57.668 [2024-12-11 15:02:40.308338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.308441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.308605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.308772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.308875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.308900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 
00:25:57.668 [2024-12-11 15:02:40.309674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.309971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.309997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.310135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.310163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.310303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.310329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.310458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.310485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.310574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.310600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.310715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.310741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.310879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.310905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.311045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 
00:25:57.668 [2024-12-11 15:02:40.311240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.311403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.311534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.311647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.311758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.311917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.311952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.312106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.312249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.312386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.312504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 
00:25:57.668 [2024-12-11 15:02:40.312627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.312765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.312903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.312928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.668 qpair failed and we were unable to recover it. 00:25:57.668 [2024-12-11 15:02:40.313835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.668 [2024-12-11 15:02:40.313862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 
00:25:57.669 [2024-12-11 15:02:40.314004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.314169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.314284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.314403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.314543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.314716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.314899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.314946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.315075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.315125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.315263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.315311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.315431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.315456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 
00:25:57.669 [2024-12-11 15:02:40.315566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.315593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.315697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.315752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.315887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.315933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.316067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.316116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.316237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.316262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.316384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.316413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.316536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.316595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.316764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.316800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.316979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.317014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.317159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.317196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 
00:25:57.669 [2024-12-11 15:02:40.317383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.317418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.317596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.317624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.317745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.317772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.317920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.317956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.318070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.318106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.318293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.318328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.318585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.318612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.318704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.318731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.318890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.318917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.319014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.319040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 
00:25:57.669 [2024-12-11 15:02:40.319155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.319181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.319347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.319382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.319530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.319593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.319742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.319768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.319863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.319891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.320065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.320101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.320290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.320328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.320477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.320514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.320684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.320724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.320848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.320876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 
00:25:57.669 [2024-12-11 15:02:40.321022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.321070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.321247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.321295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.321381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.321406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.321517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.321542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.321657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.669 [2024-12-11 15:02:40.321682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.669 qpair failed and we were unable to recover it. 00:25:57.669 [2024-12-11 15:02:40.321825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.321851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.322029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.322080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.322192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.322239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.322378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.322403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.322487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.322512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 
00:25:57.670 [2024-12-11 15:02:40.322690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.322739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.322875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.322928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.323972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.323997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.324081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.324106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 
00:25:57.670 [2024-12-11 15:02:40.324213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.324238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.324353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.324378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.324490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.324514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.324610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.324635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.324729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.324754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.324888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c4f30 is same with the state(6) to be set 00:25:57.670 [2024-12-11 15:02:40.325040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.325179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.325295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.325446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 
00:25:57.670 [2024-12-11 15:02:40.325630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.325739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.325917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.325967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.326933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.326958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 
00:25:57.670 [2024-12-11 15:02:40.327076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.327192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.327328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.327484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.327636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.327779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.327884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.327910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.328030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.328057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.328173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.328221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 00:25:57.670 [2024-12-11 15:02:40.328335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.670 [2024-12-11 15:02:40.328360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.670 qpair failed and we were unable to recover it. 
00:25:57.670 [2024-12-11 15:02:40.328477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.670 [2024-12-11 15:02:40.328502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.670 qpair failed and we were unable to recover it.
00:25:57.671 [2024-12-11 15:02:40.333494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.671 [2024-12-11 15:02:40.333531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.671 qpair failed and we were unable to recover it.
00:25:57.671 [2024-12-11 15:02:40.334343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.671 [2024-12-11 15:02:40.334382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.671 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats continuously from 15:02:40.328 through 15:02:40.361 (console time 00:25:57.670-00:25:57.675), alternating among tqpair=0x11b6fa0, tqpair=0x7f0dd8000b90, and tqpair=0x7f0dd4000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:25:57.675 [2024-12-11 15:02:40.361828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.361854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.362023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.362058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.362231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.362267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.362390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.362458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.362629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.362656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.362794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.362820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.363026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.363086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.363238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.363281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.363481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.363562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 00:25:57.675 [2024-12-11 15:02:40.363726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.675 [2024-12-11 15:02:40.363753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.675 qpair failed and we were unable to recover it. 
00:25:57.676 [2024-12-11 15:02:40.363866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.363892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.364004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.364031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.364121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.364148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.364283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.364349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.364565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.364604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.364731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.364757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.364897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.364936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.365088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.365136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.365283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.365329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.365441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.365466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 
00:25:57.676 [2024-12-11 15:02:40.365557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.365588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.365670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.365695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.365810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.365858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.365983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.366146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.366263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.366379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.366515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.366667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.366778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 
00:25:57.676 [2024-12-11 15:02:40.366896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.366922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.367057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.367082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.367234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.367273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.367419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.367447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.367579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.367618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.367769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.367800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.367919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.367946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.368028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.368054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.368206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.368255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.368371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.368396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 
00:25:57.676 [2024-12-11 15:02:40.368477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.368503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.368610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.368658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.368838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.368882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.368999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.369046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.369192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.369239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.369385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.369410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.369499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.369525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.676 qpair failed and we were unable to recover it. 00:25:57.676 [2024-12-11 15:02:40.369653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.676 [2024-12-11 15:02:40.369687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.369776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.369803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.369914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.369940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 
00:25:57.677 [2024-12-11 15:02:40.370051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.370087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.370226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.370262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.370408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.370444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.370589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.370616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.370713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.370739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.370833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.370859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.370979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.371167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.371323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.371510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 
00:25:57.677 [2024-12-11 15:02:40.371635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.371780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.371930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.371966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.372155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.372193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.372306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.372344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.372497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.372525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.372687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.372714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.372823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.372850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.372982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.373160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 
00:25:57.677 [2024-12-11 15:02:40.373354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.373488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.373639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.373820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.373957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.373984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.374102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.374219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.374349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.374469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.374585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 
00:25:57.677 [2024-12-11 15:02:40.374758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.374869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.374895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.375048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.375084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.375196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.375233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.677 [2024-12-11 15:02:40.375386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.677 [2024-12-11 15:02:40.375422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.677 qpair failed and we were unable to recover it. 00:25:57.963 [2024-12-11 15:02:40.375538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.375572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.963 qpair failed and we were unable to recover it. 00:25:57.963 [2024-12-11 15:02:40.375700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.375726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.963 qpair failed and we were unable to recover it. 00:25:57.963 [2024-12-11 15:02:40.375802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.375827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.963 qpair failed and we were unable to recover it. 00:25:57.963 [2024-12-11 15:02:40.375954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.375990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.963 qpair failed and we were unable to recover it. 00:25:57.963 [2024-12-11 15:02:40.376133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.376183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.963 qpair failed and we were unable to recover it. 
00:25:57.963 [2024-12-11 15:02:40.376340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.376376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.963 qpair failed and we were unable to recover it. 00:25:57.963 [2024-12-11 15:02:40.376550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.963 [2024-12-11 15:02:40.376578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.376722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.376750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.376841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.376867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.376946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.376972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.377086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.377121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.377268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.377302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.377417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.377451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.377566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.377593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.377712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.377759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 
00:25:57.964 [2024-12-11 15:02:40.377937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.377975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.378900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.378926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.379010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.379142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 
00:25:57.964 [2024-12-11 15:02:40.379347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.379478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.379640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.379754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.379887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.379928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.380046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.380095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.380214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.380252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.380406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.380440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.380564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.380591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.380735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.380764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 
00:25:57.964 [2024-12-11 15:02:40.380853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.380879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.380982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.381167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.381323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.381509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.381633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.381776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.381922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.381948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.382064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.382090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 00:25:57.964 [2024-12-11 15:02:40.382205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.964 [2024-12-11 15:02:40.382231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.964 qpair failed and we were unable to recover it. 
00:25:57.964 [2024-12-11 15:02:40.382370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.965 [2024-12-11 15:02:40.382403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.965 qpair failed and we were unable to recover it.
00:25:57.965 [2024-12-11 15:02:40.382583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.965 [2024-12-11 15:02:40.382635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.965 qpair failed and we were unable to recover it.
00:25:57.965 [... the same three-line failure group repeats continuously from 15:02:40.382 through 15:02:40.420, varying only the microsecond timestamps and the tqpair handle (0x7f0dd4000b90 / 0x7f0dd8000b90); every connect() attempt against 10.0.0.2:4420 fails with errno = 111 and no qpair recovers ...]
00:25:57.970 [2024-12-11 15:02:40.420611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.970 [2024-12-11 15:02:40.420652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.970 qpair failed and we were unable to recover it.
00:25:57.970 [2024-12-11 15:02:40.420777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.420812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.420959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.421012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.421176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.421213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.421364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.421401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.421520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.421570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.421720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.421756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.421905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.421940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.422086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.422123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.422272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.422308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.422454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.422489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 
00:25:57.970 [2024-12-11 15:02:40.422616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.422653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.422764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.422804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.422973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.423026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.423135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.423172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.423315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.423358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.423475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.423511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.423677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.423714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.970 [2024-12-11 15:02:40.423837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.970 [2024-12-11 15:02:40.423872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.970 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.424013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.424047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.424188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.424248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 
00:25:57.971 [2024-12-11 15:02:40.424412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.424470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.424649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.424685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.424834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.424869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.425021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.425056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.425234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.425268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.425418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.425453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.425595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.425631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.425783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.425818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.425940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.425975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.426096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.426131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 
00:25:57.971 [2024-12-11 15:02:40.426248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.426310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.426481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.426515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.426670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.426706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.426849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.426884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.427021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.427056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.427210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.427245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.427358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.427392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.427537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.427581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.427692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.427727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.427882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.427917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 
00:25:57.971 [2024-12-11 15:02:40.428067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.428102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.428210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.428243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.428394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.428429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.428578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.428615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.428789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.428824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.428938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.428990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.429147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.429183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.429303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.429339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.429522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.429568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.429690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.429728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 
00:25:57.971 [2024-12-11 15:02:40.429857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.429894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.430008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.430044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.430152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.430188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.430315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.430351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.430506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.430542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.430702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.430758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.430880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.971 [2024-12-11 15:02:40.430919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.971 qpair failed and we were unable to recover it. 00:25:57.971 [2024-12-11 15:02:40.431067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.431106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.431273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.431309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.431421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.431457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 
00:25:57.972 [2024-12-11 15:02:40.431612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.431665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.431788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.431825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.431971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.432006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.432122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.432175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.432348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.432382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.432531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.432575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.432734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.432769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.432890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.432924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.433101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.433136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.433249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.433284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 
00:25:57.972 [2024-12-11 15:02:40.433399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.433433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.433587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.433623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.433771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.433805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.433984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.434019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.434164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.434199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.434347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.434382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.434511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.434554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.434697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.434735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.434917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.434954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.435126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.435161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 
00:25:57.972 [2024-12-11 15:02:40.435288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.435324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.435508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.435555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.435706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.435762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.435929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.435969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.436095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.436141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.436277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.436315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.436430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.436468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.436616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.436655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.436811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.436851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.437039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.437078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 
00:25:57.972 [2024-12-11 15:02:40.437201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.437251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.972 [2024-12-11 15:02:40.437425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.972 [2024-12-11 15:02:40.437462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.972 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.437598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.437636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.437752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.437788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.437911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.437948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.438133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.438178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.438305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.438342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.438456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.438493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.438652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.438690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.438841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.438877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 
00:25:57.973 [2024-12-11 15:02:40.439002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.439037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.439222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.439260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.439390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.439465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.439657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.439696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.439818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.439856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.439988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.440025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.440214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.440251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.440361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.440423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.440600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.440663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.440879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.440944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 
00:25:57.973 [2024-12-11 15:02:40.441069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.441126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.441329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.441365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.441512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.441568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.441749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.441786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.441905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.441941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.442093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.442130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.442280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.442316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.442440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.442476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.442649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.442689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.442844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.442881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 
00:25:57.973 [2024-12-11 15:02:40.443069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.443106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.443234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.443271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.443420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.443464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.443626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.443673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.443788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.443825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.444022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.444058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.444172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.444210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.444363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.444400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.444518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.444566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 00:25:57.973 [2024-12-11 15:02:40.444718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.973 [2024-12-11 15:02:40.444756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.973 qpair failed and we were unable to recover it. 
00:25:57.973 [2024-12-11 15:02:40.444900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.444937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.445049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.445086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.445205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.445242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.445369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.445408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.445600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.445638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.445791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.445828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.445989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.446026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.446228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.446264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.446416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.446452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.446609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.446651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.446816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.446853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.446978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.447015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.447204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.447241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.447375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.447414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.447590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.447647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.447808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.447848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.447979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.448018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.448171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.448210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.448364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.448402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.448530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.448578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.448733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.448771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.448921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.448959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.449117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.449154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.449306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.449343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.449485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.449523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.449647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.449685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.449802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.449840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.450022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.450059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.450190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.450229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.450359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.450397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.450521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.450574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.450731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.450767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.450929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.450973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.451126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.451163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.451351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.451388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.451507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.451556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.451687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.451726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.451881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.451918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.452101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.452139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.974 qpair failed and we were unable to recover it.
00:25:57.974 [2024-12-11 15:02:40.452261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.974 [2024-12-11 15:02:40.452298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.452420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.452459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.452634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.452672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.452790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.452827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.453010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.453047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.453204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.453241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.453405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.453450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.453622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.453660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.453774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.453812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.453960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.453996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.454149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.454186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.454307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.454344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.454494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.454530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.454704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.454744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.454864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.454904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.455011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.455048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.455187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.455225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.455353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.455390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.455575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.455612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.455765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.455803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.455966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.456004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.456155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.456192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.456371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.456409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.456566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.456605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.456791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.456828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.456955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.456992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.457175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.457212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.457336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.457399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.457541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.457587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.457712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.457751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.457907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.457945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.458096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.458133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.458284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.458322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.458478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.458522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.458696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.458734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.458880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.458918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.459071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.459108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.459262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.459300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.459445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.459482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.975 qpair failed and we were unable to recover it.
00:25:57.975 [2024-12-11 15:02:40.459620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.975 [2024-12-11 15:02:40.459659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.459814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.459852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.459970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.460007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.460184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.460222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.460344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.460383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.460570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.460608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.460741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.460778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.460966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.461003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.461166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.461204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.461354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.461392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.461553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.461590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.461749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.461787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.461963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.462003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.462153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.462192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.462381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.462421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.462558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.462599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.462762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.462825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.463020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.463070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.463250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.463290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.463471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.463520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.463747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.463797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.463939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.463979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.464127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.464189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.464341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.464398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.464600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.464650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.464802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.464865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.465070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.465118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.465306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.465346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.465543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.465590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.465755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.465794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.465921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.465962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.466150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.466190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.466348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.466387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.466538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.466588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.466700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.466740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.466911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.466952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.467084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.467123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.467237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.467276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.467428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.467468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.976 qpair failed and we were unable to recover it.
00:25:57.976 [2024-12-11 15:02:40.467660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.976 [2024-12-11 15:02:40.467700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.467865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.467903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.468065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.468105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.468234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.468274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.468428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.468467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.468637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.468678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.468807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.468847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.469006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.469045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.469180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.469220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.469424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.469464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.469594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.469635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.469789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.469829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.469988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.470028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.470150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.470189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.470348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.470387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.470570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.470611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.470787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.470824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.470940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.470977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.471117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.471154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.471313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.471351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.471509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.471554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.471712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.471749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.471902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.471950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.472079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.472116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.472238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.472277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.472438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.472476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.472637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.472675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.472842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.472879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.473030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.473071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.473232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.473271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.473395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.473436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.473569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.473611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.473773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.473813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.473971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.474011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.474168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.977 [2024-12-11 15:02:40.474207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.977 qpair failed and we were unable to recover it.
00:25:57.977 [2024-12-11 15:02:40.474396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.474435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.474608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.474648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.474830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.474869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.475033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.475071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.475205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.475245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.475428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.475476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.475646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.475686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.475799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.475861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.476043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.476081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.476238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.476277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.476405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.476470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.476656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.476696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.476814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.476854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.477043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.477083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.477206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.477246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.477408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.477447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.477611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.477651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.477826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.477865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.477997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.478036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.478178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.478217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.478372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.478411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.478536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.478583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.478775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.478815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.478938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.478977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.479130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.479168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.479402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.479452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.479629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.479669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.479838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.479895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.480133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.480182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.480385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.480433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.480670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.480737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.480942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.481009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.481164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.481203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.481383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.481432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.481660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.481710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.481964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.482037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.482187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.482252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.482407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.482472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.482653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.978 [2024-12-11 15:02:40.482717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.978 qpair failed and we were unable to recover it.
00:25:57.978 [2024-12-11 15:02:40.482890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.482928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.483062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.483101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.483276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.483315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.483451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.483490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.483658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.483699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.483824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.483866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.484054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.484094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.484263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.484302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.484493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.484532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.484701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.484766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.484918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.484971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.485132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.485171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.485351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.485400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.485591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.485631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.485797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.485835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.485972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.486013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.486167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.486207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.486360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.486399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.486561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.979 [2024-12-11 15:02:40.486601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.979 qpair failed and we were unable to recover it.
00:25:57.979 [2024-12-11 15:02:40.486725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.486764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.486890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.486930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.487095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.487135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.487293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.487332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.487492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.487531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.487699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.487738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.487901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.487941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.488078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.488117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.488284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.488323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.488452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.488498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 
00:25:57.979 [2024-12-11 15:02:40.488697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.488737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.488862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.488901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.489029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.489068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.489258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.489297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.489516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.489594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.489762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.489801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.489924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.489963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.490121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.490160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.490319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.490359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 00:25:57.979 [2024-12-11 15:02:40.490514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.979 [2024-12-11 15:02:40.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.979 qpair failed and we were unable to recover it. 
00:25:57.979 [2024-12-11 15:02:40.490755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.490794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.490949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.490988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.491154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.491193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.491361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.491400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.491565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.491605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.491735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.491775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.491941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.491979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.492138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.492199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.492429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.492477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.492701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.492767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 
00:25:57.980 [2024-12-11 15:02:40.492982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.493048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.493225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.493264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.493460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.493498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.493666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.493706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.493844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.493882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.494018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.494059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.494231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.494271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.494469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.494508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.494694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.494734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.494869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.494909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 
00:25:57.980 [2024-12-11 15:02:40.495018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.495055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.495220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.495261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.495405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.495468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.495644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.495685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.495853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.495894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.496025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.496067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.496241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.496279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.496416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.496454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.496616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.496657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.496817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.496862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 
00:25:57.980 [2024-12-11 15:02:40.497031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.497070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.497193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.497233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.497368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.497408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.497633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.497673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.497806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.497844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.498035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.498075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.498231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.498272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.498430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.498469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.498657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.980 [2024-12-11 15:02:40.498696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.980 qpair failed and we were unable to recover it. 00:25:57.980 [2024-12-11 15:02:40.498854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.498893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 
00:25:57.981 [2024-12-11 15:02:40.499078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.499116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.499279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.499318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.499553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.499595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.499738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.499779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.499939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.499980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.500146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.500188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.500351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.500393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.500566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.500609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.500745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.500806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.501015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.501073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 
00:25:57.981 [2024-12-11 15:02:40.501243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.501284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.501451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.501493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.501656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.501698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.501837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.501878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.502041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.502084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.502205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.502245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.502422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.502463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.502595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.502637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.502805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.502846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.503048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.503089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 
00:25:57.981 [2024-12-11 15:02:40.503259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.503300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.503439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.503481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.503659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.503700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.503901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.503942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.504114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.504154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.504368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.504589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.504650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.504804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.504870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.505043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.505081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.505241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.505286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 
00:25:57.981 [2024-12-11 15:02:40.505442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.505482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.505654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.505693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.505880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.505919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.981 [2024-12-11 15:02:40.506050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.981 [2024-12-11 15:02:40.506089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.981 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.506247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.506286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.506494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.506537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.506681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.506719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.506879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.506918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.507073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.507111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.507248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.507286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 
00:25:57.982 [2024-12-11 15:02:40.507453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.507492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.507680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.507724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.507924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.507979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.508176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.508216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.508409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.508454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.508658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.508703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.508966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.509005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.509137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.509198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.509372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.509410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.509570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.509609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 
00:25:57.982 [2024-12-11 15:02:40.509762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.509802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.509913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.509952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.510107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.510146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.510298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.510356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.510525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.510595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.510762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.510803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.510963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.511005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.511182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.511223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.511387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.511429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.511594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.511636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 
00:25:57.982 [2024-12-11 15:02:40.511835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.511875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.512072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.512113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.512239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.512301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.512485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.512526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.512705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.512747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.512912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.512953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.513116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.513159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.513321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.513362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.513531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.513584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.513729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.513782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 
00:25:57.982 [2024-12-11 15:02:40.513982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.514023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.514200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.514240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.982 [2024-12-11 15:02:40.514404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.982 [2024-12-11 15:02:40.514444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.982 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.514615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.514657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.514829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.514869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.515007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.515050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.515218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.515260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.515378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.515418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.515595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.515637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.515780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.515821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 
00:25:57.983 [2024-12-11 15:02:40.516014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.516055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.516222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.516263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.516426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.516470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.516630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.516673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.516814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.516873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.517071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.517113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.517279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.517322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.517493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.517534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.517790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.517831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.518005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.518046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 
00:25:57.983 [2024-12-11 15:02:40.518242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.518283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.518412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.518453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.518617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.518659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.518834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.518875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.519034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.519075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.519271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.519312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.519472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.519514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.519633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.519674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.519805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.519846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.519974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.520014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 
00:25:57.983 [2024-12-11 15:02:40.520213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.520253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.520429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.520470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.520644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.520685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.520857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.520898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.521063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.521104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.521242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.521283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.521480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.521520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.521674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.521716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.521918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.521959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.522095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.522142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 
00:25:57.983 [2024-12-11 15:02:40.522367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.522416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.522587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.983 [2024-12-11 15:02:40.522629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.983 qpair failed and we were unable to recover it. 00:25:57.983 [2024-12-11 15:02:40.522763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.522803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.522976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.523016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.523199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.523240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.523367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.523408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.523555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.523597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.523740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.523781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.523946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.523986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.524152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.524193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 
00:25:57.984 [2024-12-11 15:02:40.524366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.524406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.524573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.524617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.524783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.524824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.524997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.525040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.525160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.525202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.525316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.525357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.525529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.525595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.525743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.525785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.525954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.525996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.526193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.526235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 
00:25:57.984 [2024-12-11 15:02:40.526369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.526409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.526577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.526619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.526769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.526811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.526943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.526986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.527121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.527164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.527330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.527372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.527505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.527554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.527721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.527762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.527931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.527974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.528127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.528168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 
00:25:57.984 [2024-12-11 15:02:40.528366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.528407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.528574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.528617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.528768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.528809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.528976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.529017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.529168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.529208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.529335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.529376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.529512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.529562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.529705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.529747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.529897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.529938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.530108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.530156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 
00:25:57.984 [2024-12-11 15:02:40.530346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.984 [2024-12-11 15:02:40.530387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.984 qpair failed and we were unable to recover it. 00:25:57.984 [2024-12-11 15:02:40.530584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.530626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.530764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.530805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.530999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.531040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.531201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.531242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.531420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.531462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.531632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.531674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.531797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.531839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.531977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.532021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.532182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.532222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 
00:25:57.985 [2024-12-11 15:02:40.532421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.532462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.532589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.532631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.532827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.532868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.533009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.533071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.533265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.533307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.533466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.533507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.533695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.533737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.533922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.533964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.534079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.534120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.534303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.534352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 
00:25:57.985 [2024-12-11 15:02:40.534531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.534585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.534758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.534800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.535018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.535068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.535214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.535279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.535493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.535542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.535764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.535837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.536079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.536121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.536258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.536300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.536472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.536512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.536743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.536805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 
00:25:57.985 [2024-12-11 15:02:40.536984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.537028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.537232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.537275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.537419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.537462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.537633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.537662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.537790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.537818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.538015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.538056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.538189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.538231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.538372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.538421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.538597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.538641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 00:25:57.985 [2024-12-11 15:02:40.538874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.985 [2024-12-11 15:02:40.538923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.985 qpair failed and we were unable to recover it. 
00:25:57.986 [2024-12-11 15:02:40.539082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.539208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.539342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.539521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.539699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.539844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.539972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.539999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.540151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.540178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.540272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.540304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.540426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.540454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 
00:25:57.986 [2024-12-11 15:02:40.540584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.540612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.540754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.540780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.540901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.540927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.541838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 
00:25:57.986 [2024-12-11 15:02:40.541959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.541986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.542942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.542969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.543090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.543119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.543246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.543272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 
00:25:57.986 [2024-12-11 15:02:40.543356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.543382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.543524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.543558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.543690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.543719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.543835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.543878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.986 qpair failed and we were unable to recover it. 00:25:57.986 [2024-12-11 15:02:40.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.986 [2024-12-11 15:02:40.544013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.544127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.544268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.544440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.544608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.544717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 
00:25:57.987 [2024-12-11 15:02:40.544827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.544941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.544966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.545918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.545944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 
00:25:57.987 [2024-12-11 15:02:40.546188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.546948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.546973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 
00:25:57.987 [2024-12-11 15:02:40.547458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.547876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.547996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.548022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.548106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.548136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.548262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.548293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.548410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.548436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.548522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.548557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 00:25:57.987 [2024-12-11 15:02:40.548656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.987 [2024-12-11 15:02:40.548682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.987 qpair failed and we were unable to recover it. 
00:25:57.987 [2024-12-11 15:02:40.548766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.987 [2024-12-11 15:02:40.548792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.987 qpair failed and we were unable to recover it.
00:25:57.987 [2024-12-11 15:02:40.548882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.987 [2024-12-11 15:02:40.548908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.987 qpair failed and we were unable to recover it.
00:25:57.987 [2024-12-11 15:02:40.549026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.987 [2024-12-11 15:02:40.549052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.987 qpair failed and we were unable to recover it.
00:25:57.987 [2024-12-11 15:02:40.549187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.987 [2024-12-11 15:02:40.549214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.987 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.549321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.549348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.549446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.549474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.549568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.549597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.549708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.549734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.549856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.549882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.549965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.549990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.550881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.550907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.551874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.551901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.552941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.552968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.553942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.553968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.554059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.554084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.554172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.554198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.554283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.554309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.988 [2024-12-11 15:02:40.554405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.988 [2024-12-11 15:02:40.554432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.988 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.554543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.554577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.554694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.554721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.554868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.554894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.555890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.555998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.556902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.556928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.557931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.557975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.558920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.558945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.559069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.559096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.559208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.559235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.559321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.559346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.559461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.559488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.559604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.989 [2024-12-11 15:02:40.559631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.989 qpair failed and we were unable to recover it.
00:25:57.989 [2024-12-11 15:02:40.559716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.559742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.559825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.559851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.559963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.559988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.560882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.560908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.561856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.561882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.562889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.562982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.563884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.563910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.564001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.564026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.564115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.564141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.564241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.990 [2024-12-11 15:02:40.564266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.990 qpair failed and we were unable to recover it.
00:25:57.990 [2024-12-11 15:02:40.564386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.564413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.564501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.564527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.564656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.564683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.564825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.564850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.564940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.564966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.565863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.565976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.566838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.566974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.567923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.567951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.568865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.568894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.569014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.569039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.569120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.569146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.569262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.569287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.569426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.991 [2024-12-11 15:02:40.569453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.991 qpair failed and we were unable to recover it.
00:25:57.991 [2024-12-11 15:02:40.569561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.569604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.569772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.569801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.569946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.569973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.570923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.570973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.571134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.571186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.571340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.571376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.571530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.992 [2024-12-11 15:02:40.571566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.992 qpair failed and we were unable to recover it.
00:25:57.992 [2024-12-11 15:02:40.571664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.571693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.571813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.571841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.571992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.572026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.572187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.572223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.572375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.572410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.572573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.572624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.572752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.572780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.572936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.572963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.573127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.573155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.573411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.573447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 
00:25:57.992 [2024-12-11 15:02:40.573581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.573611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.573727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.573755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.573900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.573936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.574082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.574117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.574271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.574307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.574435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.574471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.574629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.574658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.574746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.574774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.574895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.574942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.575131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.575167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 
00:25:57.992 [2024-12-11 15:02:40.575315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.575351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.575472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.575500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.575610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.575639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.575788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.575816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.575957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.992 [2024-12-11 15:02:40.575992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.992 qpair failed and we were unable to recover it. 00:25:57.992 [2024-12-11 15:02:40.576102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.576138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.576314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.576350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.576477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.576506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.576603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.576632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.579647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.579692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 
00:25:57.993 [2024-12-11 15:02:40.579830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.579860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.579984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.580034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.580214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.580251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.580413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.580449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.580611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.580641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.580769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.580798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.580956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.580984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.581104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.581155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.581303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.581338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.581512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.581559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 
00:25:57.993 [2024-12-11 15:02:40.581698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.581732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.581878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.581914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.582041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.582086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.582237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.582272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.582413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.582448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.582588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.582617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.582733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.582761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.582875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.582911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.583041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.583069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.583173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.583201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 
00:25:57.993 [2024-12-11 15:02:40.583354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.583389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.583537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.583571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.583662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.583691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.583813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.583841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.583990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.584025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.584205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.584240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.584383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.584418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.584557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.584599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.584710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.584739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.584854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.584905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 
00:25:57.993 [2024-12-11 15:02:40.585006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.585033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.585134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.585161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.585255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.585282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.585401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.585429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.993 qpair failed and we were unable to recover it. 00:25:57.993 [2024-12-11 15:02:40.585575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.993 [2024-12-11 15:02:40.585604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.585724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.585751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.585847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.585875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 
00:25:57.994 [2024-12-11 15:02:40.586432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.586941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.586969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.587092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.587127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.587264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.587292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.587440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.587467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.587614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.587650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.587790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.587825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 
00:25:57.994 [2024-12-11 15:02:40.587969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.588011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.588188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.588223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.588377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.588413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.588534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.588595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.588688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.588716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.588816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.588844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.589012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.589048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.589238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.589273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.589383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.589420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.589540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.589596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 
00:25:57.994 [2024-12-11 15:02:40.589742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.589770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.589887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.589937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.590113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.590148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.590320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.590357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.590503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.590538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.590663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.590691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.590784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.590812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.590913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.590963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.591114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.994 [2024-12-11 15:02:40.591149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.994 qpair failed and we were unable to recover it. 00:25:57.994 [2024-12-11 15:02:40.591266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.591302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 
00:25:57.995 [2024-12-11 15:02:40.591455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.591491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.591650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.591691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.591835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.591878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.592002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.592054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.592203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.592251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.592371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.592398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.592522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.592559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.592689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.592736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.592865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.592892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.593020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 
00:25:57.995 [2024-12-11 15:02:40.593165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.593274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.593394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.593574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.593772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.593939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.593975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.594149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.594184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.594359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.594386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.594518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.594555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.594685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.594721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 
00:25:57.995 [2024-12-11 15:02:40.594839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.594880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.595028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.595063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.595237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.595289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.595408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.595435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.595600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.595637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.595744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.595771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.595870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.595898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 
00:25:57.995 [2024-12-11 15:02:40.596397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.596929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.596967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.597072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.597107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.597258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.597294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.597409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.995 [2024-12-11 15:02:40.597439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.995 qpair failed and we were unable to recover it. 00:25:57.995 [2024-12-11 15:02:40.597574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.597603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.597776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.597826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 
00:25:57.996 [2024-12-11 15:02:40.597948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.598114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.598264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.598411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.598533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.598718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.598893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.598928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.599083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.599120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.599226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.599262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.599414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.599443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 
00:25:57.996 [2024-12-11 15:02:40.599575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.599603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.599754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.599803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.599927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.599976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.600954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.600981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 
00:25:57.996 [2024-12-11 15:02:40.601081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.601241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.601361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.601483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.601616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.601742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.601919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.601947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.602044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.602072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.602189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.602218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 00:25:57.996 [2024-12-11 15:02:40.602314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.996 [2024-12-11 15:02:40.602343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:57.996 qpair failed and we were unable to recover it. 
00:25:57.996 [2024-12-11 15:02:40.602436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.602464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.602570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.602599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.602691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.602719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.602842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.602870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.603026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.603054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.603170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.603199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.603348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.996 [2024-12-11 15:02:40.603376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.996 qpair failed and we were unable to recover it.
00:25:57.996 [2024-12-11 15:02:40.603494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.603522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.603626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.603654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.603744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.603773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.603891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.603918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.604853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.604975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.605877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.605976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.606951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.606979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.607111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.607138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.607227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.607256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.607379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.607407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.607559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.607588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.607694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.607729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.607867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.607895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.608041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.608069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.608159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.608187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.608308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.608337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.608450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.608492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.608651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.608690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.608821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.608858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.609006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.997 [2024-12-11 15:02:40.609041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.997 qpair failed and we were unable to recover it.
00:25:57.997 [2024-12-11 15:02:40.609205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.609243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.609401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.609445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.609574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.609626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.609740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.609775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.609916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.609952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.610127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.610180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.610282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.610309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.610433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.610461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.610594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.610631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.610796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.610846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.610982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.611935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.611963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.612060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.612087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.612210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.612238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.612377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.612420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.612567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.612597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.612703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.612733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.612859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.612887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.613007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.613041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.613188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.613216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.613310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.613340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.613466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.613494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.613672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.613721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.613867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.613918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.614099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.614146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.614293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.614321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.614443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.614472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.614579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.614608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.614706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.614736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.614882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.614917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.615092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.615127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.615261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.615296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.615445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.998 [2024-12-11 15:02:40.615474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.998 qpair failed and we were unable to recover it.
00:25:57.998 [2024-12-11 15:02:40.615591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.615619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.615715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.615744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.615897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.615932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.616082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.616117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.616263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.616299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.616423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.616454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.616577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.616607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.616749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.616796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.616907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.616957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.617947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.617974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.618100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.618128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.618257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.618287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.618382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.618411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.618529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.618564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.618737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.618772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.618916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.618951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.619102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.619137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.619295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.619331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.619474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.619509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.619661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.619704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.619830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.619865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.619978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.620014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.620131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.620167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.620303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.620342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.620491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.620519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.620626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.620656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.620773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.620824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.620959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.621009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.621127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.999 [2024-12-11 15:02:40.621176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:57.999 qpair failed and we were unable to recover it.
00:25:57.999 [2024-12-11 15:02:40.621279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.621309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.621397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.621425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.621551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.621580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.621697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.621726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.621831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.621860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.621982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.622106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.622220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.622381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.622592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.622718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.622859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.622895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.623078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.623113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.623247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.623292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.623409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.623444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.623586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.623614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.623705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.623734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.623882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.623918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.624083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.624120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.624303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.624339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.624451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.624486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.624635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.624678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.624793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.624831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.624957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.624993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.625160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.625209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.625304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.625332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.625442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.625470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.625573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.625603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.625727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.625755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.625877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.625904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.626865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.626914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.627089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.627138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.627321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.000 [2024-12-11 15:02:40.627369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.000 qpair failed and we were unable to recover it.
00:25:58.000 [2024-12-11 15:02:40.627475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.627503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.627626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.627677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.627814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.627869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.627986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.628899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.001 [2024-12-11 15:02:40.628927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.001 qpair failed and we were unable to recover it.
00:25:58.001 [2024-12-11 15:02:40.629045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.629073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.629221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.629258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.629417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.629452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.629576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.629622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.629743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.629778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.629889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.629922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.630061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.630095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.630264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.630320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.630471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.630500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.630651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.630704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 
00:25:58.001 [2024-12-11 15:02:40.630809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.630843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.630953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.630982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.631956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.631984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.632080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 
00:25:58.001 [2024-12-11 15:02:40.632259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.632384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.632513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.632676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.632821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.632968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.632997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.633121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.633149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.001 [2024-12-11 15:02:40.633242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.001 [2024-12-11 15:02:40.633271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.001 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.633395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.633425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.633553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.633582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 
00:25:58.002 [2024-12-11 15:02:40.633731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.633780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.633957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.634142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.634283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.634465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.634616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.634752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.634904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.634932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.635056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.635189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 
00:25:58.002 [2024-12-11 15:02:40.635310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.635466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.635618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.635769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.635956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.635984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.636078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.636193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.636323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.636499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.636661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 
00:25:58.002 [2024-12-11 15:02:40.636796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.636921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.636949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.637885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.637980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 
00:25:58.002 [2024-12-11 15:02:40.638128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.638287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.638442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.638558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.638735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.638884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.638913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.639005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.002 [2024-12-11 15:02:40.639033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.002 qpair failed and we were unable to recover it. 00:25:58.002 [2024-12-11 15:02:40.639155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.639276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.639392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 
00:25:58.003 [2024-12-11 15:02:40.639507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.639638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.639792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.639910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.639938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.640037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.640211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.640378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.640522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.640670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.640792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 
00:25:58.003 [2024-12-11 15:02:40.640912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.640941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.641039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.641068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.641236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.641265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.641397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.641424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.641552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.641582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.641721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.641769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.641908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.641956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.642056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.642208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.642354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 
00:25:58.003 [2024-12-11 15:02:40.642534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.642661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.642779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.642905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.642934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.643082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.643110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.643232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.643259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.643404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.643431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.643592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.643644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.643782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.643821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 00:25:58.003 [2024-12-11 15:02:40.643934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.003 [2024-12-11 15:02:40.643969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.003 qpair failed and we were unable to recover it. 
00:25:58.003 [2024-12-11 15:02:40.644147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.644182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.644304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.644333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.644420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.644448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.644538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.644574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.644704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.644733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.644886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.644920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.645054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.645088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.645198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.645231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.645405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.645438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.645612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.645641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 
00:25:58.004 [2024-12-11 15:02:40.645731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.645780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.645882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.645916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.646088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.646122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.646234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.646267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.646392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.646421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.646510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.646538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.646677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.646704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.646843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.646877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.647000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.647046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.647155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.647188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 
00:25:58.004 [2024-12-11 15:02:40.647308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.647342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.647533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.647582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.647716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.647747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.647858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.647892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.648018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.648192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.648343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.648465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.648599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.648742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 
00:25:58.004 [2024-12-11 15:02:40.648859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.648887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.649001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.649028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.649179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.649207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.649328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.649361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.649476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.649509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.649637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.649666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.649834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.649867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.650005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.650041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.650179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.004 [2024-12-11 15:02:40.650213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.004 qpair failed and we were unable to recover it. 00:25:58.004 [2024-12-11 15:02:40.650379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.650429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 
00:25:58.005 [2024-12-11 15:02:40.650527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.650567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.650673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.650701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.650878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.650925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.651951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.651980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 
00:25:58.005 [2024-12-11 15:02:40.652122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.652167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.652287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.652316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.652409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.652436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.652560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.652589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.652730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.652759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.652881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.652909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.653054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.653082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.653198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.653227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.653346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.653373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 00:25:58.005 [2024-12-11 15:02:40.653488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.005 [2024-12-11 15:02:40.653515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.005 qpair failed and we were unable to recover it. 
00:25:58.010 [2024-12-11 15:02:40.684204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.684231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.010 [2024-12-11 15:02:40.684356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.684384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.010 [2024-12-11 15:02:40.684473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.684500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.010 [2024-12-11 15:02:40.684635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.684663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.010 [2024-12-11 15:02:40.684762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.684789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.010 [2024-12-11 15:02:40.684883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.684910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.010 [2024-12-11 15:02:40.685053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.010 [2024-12-11 15:02:40.685080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.010 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.685196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.685228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.685319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.685347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.685469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.685498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 
00:25:58.011 [2024-12-11 15:02:40.685622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.685650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.685743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.685773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.685890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.685919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.686877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.686906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 
00:25:58.011 [2024-12-11 15:02:40.687003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.687189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.687341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.687461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.687608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.687727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.687878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.687906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.688031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.688058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.688203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.688231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.688356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.688384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 
00:25:58.011 [2024-12-11 15:02:40.688531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.688566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.688737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.688765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.688849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.688876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.689062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.689237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.689395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.689510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.689704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.689866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.689973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 
00:25:58.011 [2024-12-11 15:02:40.690100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.690232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.690353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.690498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.690664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.690838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.690883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.690994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.691023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.691148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.691182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.691306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.691334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.691458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.691487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 
00:25:58.011 [2024-12-11 15:02:40.691636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.011 [2024-12-11 15:02:40.691665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.011 qpair failed and we were unable to recover it. 00:25:58.011 [2024-12-11 15:02:40.691779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.691806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.691908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.691952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.692068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.692115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.692237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.692268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.692409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.692436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.692572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.692601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.692720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.692747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.692890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.692923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.693084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.693116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 
00:25:58.012 [2024-12-11 15:02:40.693229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.693260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.693388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.693418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.693516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.693550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.693681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.693708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.693850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.693895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.694066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.694207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.694353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.694507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.694628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 
00:25:58.012 [2024-12-11 15:02:40.694755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.694905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.694933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.695054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.695082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.695209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.695239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.695337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.695365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.695488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.695516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.695614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.695662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.695800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.695833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.696003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.696140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 
00:25:58.012 [2024-12-11 15:02:40.696315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.696455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.696641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.696782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.696959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.696992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.697110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.697158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.697272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.697305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.697434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.697487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.697582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.697610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.697730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.697758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 
00:25:58.012 [2024-12-11 15:02:40.697896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.012 [2024-12-11 15:02:40.697928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.012 qpair failed and we were unable to recover it. 00:25:58.012 [2024-12-11 15:02:40.698042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.698074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.698226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.698258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.698392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.698427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.698589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.698631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.698789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.698819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.698913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.698943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.699050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.699083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.699211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.699245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.699409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.699437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 
00:25:58.013 [2024-12-11 15:02:40.699564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.699593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.699695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.699724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.699806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.699834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.699980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.700148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.700268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.700419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.700574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.700764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.700896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.700928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 
00:25:58.013 [2024-12-11 15:02:40.701028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.701225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.701389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.701512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.701670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.701808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.701937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.701968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.702128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.702159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.702276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.702308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.702405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.702435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 
00:25:58.013 [2024-12-11 15:02:40.702553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.702582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.702701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.702748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.702880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.702926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 00:25:58.013 [2024-12-11 15:02:40.703887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.703915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it. 
00:25:58.013 [2024-12-11 15:02:40.704041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.013 [2024-12-11 15:02:40.704071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.013 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from [2024-12-11 15:02:40.704210] through [2024-12-11 15:02:40.736960] (wall clock 00:25:58.013 to 00:25:58.304), alternating among tqpair=0x7f0dd4000b90, tqpair=0x7f0dd8000b90, and tqpair=0x11b6fa0, always with addr=10.0.0.2, port=4420 ...]
00:25:58.304 [2024-12-11 15:02:40.737083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.737130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.737278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.737311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.737414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.737446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.737597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.737626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.737720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.737748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.737894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.737926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.738089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.738122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.738259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.738291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.738475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.738503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.738608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.738636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 
00:25:58.304 [2024-12-11 15:02:40.738793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.738842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.738993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.739027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.739148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.739194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.739365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.739400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.739543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.739605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.739727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.739755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.739876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.739903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.740001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.740030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.304 [2024-12-11 15:02:40.740183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.304 [2024-12-11 15:02:40.740217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.304 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.740334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.740362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 
00:25:58.305 [2024-12-11 15:02:40.740518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.740564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.740678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.740707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.740800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.740827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.740965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.740999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.741167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.741201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.741372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.741406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.741536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.741573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.741712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.741746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.741927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.741961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.742101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.742136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 
00:25:58.305 [2024-12-11 15:02:40.742239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.742273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.742445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.742479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.742608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.742638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.742737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.742765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.742914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.742961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.743111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.743145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.743264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.743309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.743453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.743487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.743642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.743671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.743792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.743819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 
00:25:58.305 [2024-12-11 15:02:40.743934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.743962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.744157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.744192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.744308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.744358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.744541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.744575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.744665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.744693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.744814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.744842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.744937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.744964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.745093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.745127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.745290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.745323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.745417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.745451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 
00:25:58.305 [2024-12-11 15:02:40.745557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.745601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.745720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.745748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.745874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.745924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.746062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.746095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.746217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.746270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.746409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.746443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.746560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.746606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.305 [2024-12-11 15:02:40.746729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.305 [2024-12-11 15:02:40.746757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.305 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.746860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.746888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.746979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 
00:25:58.306 [2024-12-11 15:02:40.747159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.747321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.747499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.747696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.747847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.747970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.747998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.748162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.748204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.748320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.748357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.748523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.748564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.748661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.748689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 
00:25:58.306 [2024-12-11 15:02:40.748840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.748868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.748984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.749034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.749249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.749284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.749446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.749476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.749598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.749627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.749749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.749777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.749902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.749949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.750072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.750123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.750332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.750368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.750499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.750527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 
00:25:58.306 [2024-12-11 15:02:40.750684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.750712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.750856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.750892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.751082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.751115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.751256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.751290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.751400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.751434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.751578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.751606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.751725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.751752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.751851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.751878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.752013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.752048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.752182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.752215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 
00:25:58.306 [2024-12-11 15:02:40.752357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.752390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.752519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.752568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.752671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.752701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.752835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.752870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.753027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.753073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.753170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.753200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.753330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.306 [2024-12-11 15:02:40.753360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.306 qpair failed and we were unable to recover it. 00:25:58.306 [2024-12-11 15:02:40.753459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.753487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.753607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.753636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.753754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.753787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 
00:25:58.307 [2024-12-11 15:02:40.753893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.753926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.754096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.754275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.754412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.754561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.754721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.754851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.754979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.755012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.755227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.755276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.755395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.755423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 
00:25:58.307 [2024-12-11 15:02:40.755515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.755553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.755644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.755672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.755777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.755810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.755992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.756165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.756322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.756465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.756593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.756708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.756831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 
00:25:58.307 [2024-12-11 15:02:40.756966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.756998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.757164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.757197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.757377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.757410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.757570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.757600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.757747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.757795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.757969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.758020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.758135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.758187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.758322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.758364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.758486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.758515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 00:25:58.307 [2024-12-11 15:02:40.758623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.307 [2024-12-11 15:02:40.758653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.307 qpair failed and we were unable to recover it. 
00:25:58.307 [2024-12-11 15:02:40.758810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.307 [2024-12-11 15:02:40.758844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.307 qpair failed and we were unable to recover it.
00:25:58.313 [... the same three-line failure (posix.c:1054: connect() failed, errno = 111 / nvme_tcp.c:2288: sock connection error / "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 15:02:40.758989 and 15:02:40.792652, cycling through tqpair handles 0x7f0dd8000b90, 0x7f0dd4000b90, and 0x11b6fa0, all targeting addr=10.0.0.2, port=4420 ...]
00:25:58.313 [2024-12-11 15:02:40.792754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.792780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.792894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.792920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.793860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 
00:25:58.313 [2024-12-11 15:02:40.793968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.793995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.794861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.794887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.795006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.795032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.795178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.795203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 
00:25:58.313 [2024-12-11 15:02:40.795331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.795357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.313 [2024-12-11 15:02:40.795449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.313 [2024-12-11 15:02:40.795476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.313 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.795593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.795624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.795713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.795739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.795874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.795903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 
00:25:58.314 [2024-12-11 15:02:40.796718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.796967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.796994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.797953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.797979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 
00:25:58.314 [2024-12-11 15:02:40.798060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.798222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.798339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.798490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.798608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.798727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.798896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.798923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 
00:25:58.314 [2024-12-11 15:02:40.799441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.799957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.799983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.800095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.800122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.800212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.800238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.800328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.800354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.800465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.800491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.314 [2024-12-11 15:02:40.800618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.800645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 
00:25:58.314 [2024-12-11 15:02:40.800759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.314 [2024-12-11 15:02:40.800785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.314 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.800930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.800956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.801892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.801918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.802011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 
00:25:58.315 [2024-12-11 15:02:40.802179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.802304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.802454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.802624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.802753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.802909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.802952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.803094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.803206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.803352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.803494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 
00:25:58.315 [2024-12-11 15:02:40.803627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.803768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.803910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.803936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.804871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.804897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 
00:25:58.315 [2024-12-11 15:02:40.805020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.805872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.805987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.806013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.806096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.806122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 00:25:58.315 [2024-12-11 15:02:40.806220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.315 [2024-12-11 15:02:40.806247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.315 qpair failed and we were unable to recover it. 
00:25:58.316 [2024-12-11 15:02:40.806384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.806410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.806601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.806629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.806732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.806763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.806848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.806876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.806963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.806988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.807135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.807272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.807400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.807540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.807667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 
00:25:58.316 [2024-12-11 15:02:40.807785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.807932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.807958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.808892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.808917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 
00:25:58.316 [2024-12-11 15:02:40.809128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.809922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.809948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.810067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.810184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.810327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 
00:25:58.316 [2024-12-11 15:02:40.810468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.810651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.810795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.810959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.810985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.811074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.811100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.811217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.811244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.811323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.811350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.811469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.811495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.316 [2024-12-11 15:02:40.811620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.316 [2024-12-11 15:02:40.811647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.316 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.811774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.811801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 
00:25:58.317 [2024-12-11 15:02:40.811911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.811937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.812908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.812933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 
00:25:58.317 [2024-12-11 15:02:40.813353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.813907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.813981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.814136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.814249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.814388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.814525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 
00:25:58.317 [2024-12-11 15:02:40.814643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.814804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.814955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.814982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.815199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.815344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.815453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.815596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.815718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.815867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.815973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.816011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 
00:25:58.317 [2024-12-11 15:02:40.816107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.816135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.816226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.816253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.816347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.816374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.317 qpair failed and we were unable to recover it. 00:25:58.317 [2024-12-11 15:02:40.816457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.317 [2024-12-11 15:02:40.816482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.816628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.816654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.816744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.816769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.816879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.816912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.817060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.817210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.817429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 
00:25:58.318 [2024-12-11 15:02:40.817604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.817715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.817851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.817970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.817998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.818115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.818255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.818401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.818539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.818678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.818854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 
00:25:58.318 [2024-12-11 15:02:40.818969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.818996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.819136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.819302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.819465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.819584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.819707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.819869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.819985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.820103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.820208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 
00:25:58.318 [2024-12-11 15:02:40.820341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.820462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.820679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.820795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.820937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.820963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.821074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.821215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.821333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.821480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.821635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 
00:25:58.318 [2024-12-11 15:02:40.821754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.821864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.821890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.318 qpair failed and we were unable to recover it. 00:25:58.318 [2024-12-11 15:02:40.822000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.318 [2024-12-11 15:02:40.822026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.822138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.822164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.822260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.822285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.822401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.822427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.822507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.822533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.822656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.822682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.822799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.822825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.823020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 
00:25:58.319 [2024-12-11 15:02:40.823159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.823264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.823434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.823581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.823694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.823916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.823943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 
00:25:58.319 [2024-12-11 15:02:40.824639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.824912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.824999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.825171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.825290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.825406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.825553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.825670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.825795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 
00:25:58.319 [2024-12-11 15:02:40.825972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.825999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.826966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.826995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.827136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.827164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.827276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.827324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 
00:25:58.319 [2024-12-11 15:02:40.827463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.827490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.319 qpair failed and we were unable to recover it. 00:25:58.319 [2024-12-11 15:02:40.827614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.319 [2024-12-11 15:02:40.827649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.827773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.827798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.827913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.827938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.828073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.828227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.828377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.828551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.828664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.828780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 
00:25:58.320 [2024-12-11 15:02:40.828924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.828951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.829957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.829995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.830107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.830143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.830277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.830305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 
00:25:58.320 [2024-12-11 15:02:40.830426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.830452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.830570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.830597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.830680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.830705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.830805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.830831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.830977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.831120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.831272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.831399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.831538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.831694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 
00:25:58.320 [2024-12-11 15:02:40.831817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.831940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.831970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.832089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.832125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.832253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.832281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.832414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.832455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.832583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.832613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.832731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.832759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.832902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.832929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.833032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.833060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.320 [2024-12-11 15:02:40.833154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.833188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 
00:25:58.320 [2024-12-11 15:02:40.833304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.320 [2024-12-11 15:02:40.833331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.320 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.833454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.833481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.833586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.833617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.833700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.833728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.833819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.833847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.833986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.834017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.834127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.834163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.834280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.834316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.834461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.834489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.834603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.834631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 
00:25:58.321 [2024-12-11 15:02:40.834767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.834804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.834971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.835022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.835183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.835234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.835390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.835418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.835584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.835612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.835755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.835782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.835905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.835932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.836057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.836088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.836178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.836205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.836322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.836349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 
00:25:58.321 [2024-12-11 15:02:40.836492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.836529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.836683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.836719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.836836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.836873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 00:25:58.321 [2024-12-11 15:02:40.837963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.321 [2024-12-11 15:02:40.837991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.321 qpair failed and we were unable to recover it. 
00:25:58.321 [2024-12-11 15:02:40.838102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.321 [2024-12-11 15:02:40.838128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.321 qpair failed and we were unable to recover it.
00:25:58.321 [2024-12-11 15:02:40.838248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.321 [2024-12-11 15:02:40.838275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.321 qpair failed and we were unable to recover it.
00:25:58.321 [2024-12-11 15:02:40.838402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.321 [2024-12-11 15:02:40.838429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.321 qpair failed and we were unable to recover it.
00:25:58.321 [2024-12-11 15:02:40.838557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.321 [2024-12-11 15:02:40.838585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.321 qpair failed and we were unable to recover it.
00:25:58.321 [2024-12-11 15:02:40.838687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.321 [2024-12-11 15:02:40.838715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.321 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.838831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.838858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.838955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.838982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.839904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.839932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.840077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.840232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.840346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.840525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.840685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.840842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.840977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.841013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.841129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.841165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.841315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.841353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.841478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.841511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.841650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.841677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.841830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.841881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.842864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.842890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.843908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.843935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.844018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.844045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.844139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.844165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.844280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.844306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.322 [2024-12-11 15:02:40.844394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.322 [2024-12-11 15:02:40.844419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.322 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.844554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.844587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.844712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.844740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.844864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.844891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.844985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.845099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.845227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.845404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.845553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.845675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.845843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.845870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.846887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.846991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.847824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.847976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.848902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.848993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.849149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.849291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.849466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.849655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.849805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.323 [2024-12-11 15:02:40.849933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.323 [2024-12-11 15:02:40.849964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.323 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.850089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.850116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.850268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.850298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.850395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.850423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.850542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.850580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.850677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.850705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.850854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.850906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.851106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.851155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.851289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.851320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.851422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.851449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.851621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.851671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.851836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.851889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.852043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.852099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.852271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.852320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.852439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.852467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.852619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.852670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.852834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.852886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.852974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.853200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.853343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.853460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.853610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.853766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.853876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.853903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.854881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.854909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.855956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.855984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.856090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.856117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.324 [2024-12-11 15:02:40.856206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.324 [2024-12-11 15:02:40.856233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.324 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.856327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.856354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.856476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.856503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.856633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.856661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.856755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.856782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.856872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.856899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.856984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.857111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.857288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.857444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.857625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.857745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.857865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.857892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.858938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.858966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.859117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.859296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.859450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.859574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.859691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.859837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.859987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.860152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.860299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.860443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.860587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.860764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.860877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.860904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.325 [2024-12-11 15:02:40.861819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.325 [2024-12-11 15:02:40.861844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.325 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.861939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.861965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.862930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.326 [2024-12-11 15:02:40.862955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.326 qpair failed and we were unable to recover it.
00:25:58.326 [2024-12-11 15:02:40.863069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.863207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.863347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.863497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.863645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.863766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.863883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.863926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 
00:25:58.326 [2024-12-11 15:02:40.864418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.864938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.864964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 
00:25:58.326 [2024-12-11 15:02:40.865722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.865902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.865988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.866014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.866154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.866179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.866295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.866321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.326 qpair failed and we were unable to recover it. 00:25:58.326 [2024-12-11 15:02:40.866433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.326 [2024-12-11 15:02:40.866460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.866607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.866634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.866764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.866790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.866895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.866921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 
00:25:58.327 [2024-12-11 15:02:40.867123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.867927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.867952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.868037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.868063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.868140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.868164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.868283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.868310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 
00:25:58.327 [2024-12-11 15:02:40.868423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.868455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.868614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.868641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.868776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.868828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.869018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.869054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.869175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.869201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.869309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.869335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.869475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.869500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.869637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.869665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.869784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.869819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.870055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.870082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 
00:25:58.327 [2024-12-11 15:02:40.870215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.870241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.870341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.870366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.870462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.870490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.870629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.870656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.870810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.870838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.871006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.871137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.871279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.871396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.871507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 
00:25:58.327 [2024-12-11 15:02:40.871680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.871933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.871978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.872111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.872138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.872281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.872307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.327 qpair failed and we were unable to recover it. 00:25:58.327 [2024-12-11 15:02:40.872401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.327 [2024-12-11 15:02:40.872426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.872513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.872539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.872637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.872663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.872808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.872834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.872970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.872996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.873116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.873142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 
00:25:58.328 [2024-12-11 15:02:40.873257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.873284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.873375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.873402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.873519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.873551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.873664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.873691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.873842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.873870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.873975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 
00:25:58.328 [2024-12-11 15:02:40.874575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.874874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.874984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.875134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.875305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.875411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.875553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.875716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.875891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.875917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 
00:25:58.328 [2024-12-11 15:02:40.876015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.876155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.876275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.876420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.876589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.876750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.876951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.876989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.877129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.877157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.877239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.877265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.877357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.877384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 
00:25:58.328 [2024-12-11 15:02:40.877502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.877530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.877657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.877683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.877771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.877797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.328 [2024-12-11 15:02:40.877941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.328 [2024-12-11 15:02:40.878003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.328 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.878101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.878129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.878255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.878282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.878437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.878463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.878585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.878611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.878693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.878720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.878873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.878902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 
00:25:58.329 [2024-12-11 15:02:40.879013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.879041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.879189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.879217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.879328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.879374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.879504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.879531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.879711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.879754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.879858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.879886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.880039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.880214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.880370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.880482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 
00:25:58.329 [2024-12-11 15:02:40.880646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.880764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.880901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.880927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.881927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.881953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 
00:25:58.329 [2024-12-11 15:02:40.882087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.882131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.882264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.882293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.882392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.882418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.882584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.882621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.882773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.882829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.883008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.883182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.883319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.883437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.883613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 
00:25:58.329 [2024-12-11 15:02:40.883759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.883872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.883899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.329 [2024-12-11 15:02:40.884005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.329 [2024-12-11 15:02:40.884031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.329 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 00:25:58.330 [2024-12-11 15:02:40.884896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.330 [2024-12-11 15:02:40.884922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.330 qpair failed and we were unable to recover it. 
00:25:58.330 [2024-12-11 15:02:40.885012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.885132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.885289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.885447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.885576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.885738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.885887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.885921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.886110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.886270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.886387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.886530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.886662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.886837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.886989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.887181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.887345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.887463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.887608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.887753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.887868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.887894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.330 [2024-12-11 15:02:40.888876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.330 [2024-12-11 15:02:40.888903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.330 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.889894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.889920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.890946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.890972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.891949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.891992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.892905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.892931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.893967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.893993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.894093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.894121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.894208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.894236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.894352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.331 [2024-12-11 15:02:40.894379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.331 qpair failed and we were unable to recover it.
00:25:58.331 [2024-12-11 15:02:40.894573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.894600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.894685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.894712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.894823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.894850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.894986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.895832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.895986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.896153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.896345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.896489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.896610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.896725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.896894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.896941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.897130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.897177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.897348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.897395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.897512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.897537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.897650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.897678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.897818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.897864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.897978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.898005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.898093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.898124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.898243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.898269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.898399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.898439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.898641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.898669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.898812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.898840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.899877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.899905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.900020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.900048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.900133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.900161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.900310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.900340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.900463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.332 [2024-12-11 15:02:40.900489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.332 qpair failed and we were unable to recover it.
00:25:58.332 [2024-12-11 15:02:40.900610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.900643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.900756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.900786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.900970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.901015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.901189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.901234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.901426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.901456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.901601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.901628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.901717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.901744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.901858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.901886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.902048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.902175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.902341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.902491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.902630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.902849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.902981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.903111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.903288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.903409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.903540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.903731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.903872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.903917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.904880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.904989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.905165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.905375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.905580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.905715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.905857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.905970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.905998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.906132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.906160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.906263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.906291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.906439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.906466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.906608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.906635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.333 qpair failed and we were unable to recover it.
00:25:58.333 [2024-12-11 15:02:40.906750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.333 [2024-12-11 15:02:40.906775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.906918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.906962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.907106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.907133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.907322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.907351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.907498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.907525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.907632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.907659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.907789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.907816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.907900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.907941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.908907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.908935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.909911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.909937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.910075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.910101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.910214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.910239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.910428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.910458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.910587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.910615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.910732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.910759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.910939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.910986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.911132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.911179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.911303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.911332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.911494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.911522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.911641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.911669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.911813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.911840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.911965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.911994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.912176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.912205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.912326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.912356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.912480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.912508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.912666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.912693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.912835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.334 [2024-12-11 15:02:40.912860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.334 qpair failed and we were unable to recover it.
00:25:58.334 [2024-12-11 15:02:40.912977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.913834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.913973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.914001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.914120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.914148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.914264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.914292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.914497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.914524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.914660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.914686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.914799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.914841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.914973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.915864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.915979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.916006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.916165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.916193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.916298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.916342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.916466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.916493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.916647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.916686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.916815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.335 [2024-12-11 15:02:40.916854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.335 qpair failed and we were unable to recover it.
00:25:58.335 [2024-12-11 15:02:40.916992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.917036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.917200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.917243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.917371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.917399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.917532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.917564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.917651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.917677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.917847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.917895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.917977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.918004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.918097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.918125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.918203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.918246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.335 qpair failed and we were unable to recover it. 00:25:58.335 [2024-12-11 15:02:40.918392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.335 [2024-12-11 15:02:40.918420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 
00:25:58.336 [2024-12-11 15:02:40.918519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.918549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.918647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.918676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.918765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.918791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.918994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.919162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.919358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.919514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.919640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.919774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.919963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.919990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 
00:25:58.336 [2024-12-11 15:02:40.920169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.920225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.920315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.920343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.920521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.920569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.920695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.920723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.920861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.920909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.920992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.921136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.921269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.921419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.921557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 
00:25:58.336 [2024-12-11 15:02:40.921722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.921894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.921919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.922060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.922174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.922298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.922472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.922620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.922822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.922990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.923038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.923179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.923214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 
00:25:58.336 [2024-12-11 15:02:40.923349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.923375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.923495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.923521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.923689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.923733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.923867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.923895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.924100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.924150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.924286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.924312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.924420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.924446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.924558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.924585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.336 [2024-12-11 15:02:40.924671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.336 [2024-12-11 15:02:40.924696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.336 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.924806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.924832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 
00:25:58.337 [2024-12-11 15:02:40.924946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.924972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.925954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.925980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.926122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.926148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.926226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.926251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 
00:25:58.337 [2024-12-11 15:02:40.926397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.926422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.926514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.926539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.926651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.926679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.926843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.926878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.927017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.927052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.927214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.927266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.927408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.927434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.927568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.927626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.927785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.927836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.927931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.927960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 
00:25:58.337 [2024-12-11 15:02:40.928085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.928114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.928253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.928302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.928421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.928449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.928581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.928610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.928730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.928757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.928902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.928928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.929076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.929129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.929235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.929271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.929432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.929458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.929554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.929581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 
00:25:58.337 [2024-12-11 15:02:40.929726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.929774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.929977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.930046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.930338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.930393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.930485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.930513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.930674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.930736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.930891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.930945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.931091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.931143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.931263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.337 [2024-12-11 15:02:40.931291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.337 qpair failed and we were unable to recover it. 00:25:58.337 [2024-12-11 15:02:40.931435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.931463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.931620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.931647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 
00:25:58.338 [2024-12-11 15:02:40.931764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.931792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.931951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.931994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.932130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.932172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.932313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.932357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.932516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.932552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.932686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.932728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.932853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.932880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.932989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.933172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.933309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 
00:25:58.338 [2024-12-11 15:02:40.933445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.933584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.933726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.933901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.933927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.934039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.934065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.934212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.934238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.934355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.934380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.934526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.934572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.934697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.934724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.934867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.934910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 
00:25:58.338 [2024-12-11 15:02:40.935129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.935174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.935327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.935356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.935452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.935480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.935638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.935666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.935760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.935787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.935913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.935984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.936130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.936159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.936278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.936306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.936448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.936474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.936619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.936647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 
00:25:58.338 [2024-12-11 15:02:40.936756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.936782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.936932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.936968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.937098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.937145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.937285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.937315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.937456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.937488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.937655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.937684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.937878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.338 [2024-12-11 15:02:40.937980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.338 qpair failed and we were unable to recover it. 00:25:58.338 [2024-12-11 15:02:40.938288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.339 [2024-12-11 15:02:40.938328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.339 qpair failed and we were unable to recover it. 00:25:58.339 [2024-12-11 15:02:40.938455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.339 [2024-12-11 15:02:40.938481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.339 qpair failed and we were unable to recover it. 00:25:58.339 [2024-12-11 15:02:40.938596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.339 [2024-12-11 15:02:40.938623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.339 qpair failed and we were unable to recover it. 
00:25:58.339 [2024-12-11 15:02:40.938739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.339 [2024-12-11 15:02:40.938765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420
00:25:58.339 qpair failed and we were unable to recover it.
00:25:58.339 [2024-12-11 15:02:40.939192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.339 [2024-12-11 15:02:40.939261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420
00:25:58.339 qpair failed and we were unable to recover it.
00:25:58.339 [2024-12-11 15:02:40.941688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.339 [2024-12-11 15:02:40.941728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420
00:25:58.339 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x11b6fa0, 0x7f0dd4000b90, and 0x7f0dd8000b90, all with addr=10.0.0.2, port=4420, from 2024-12-11 15:02:40.938 through 15:02:40.980 (elapsed 00:25:58.339-00:25:58.344) ...]
00:25:58.344 [2024-12-11 15:02:40.980490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.344 [2024-12-11 15:02:40.980519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.344 qpair failed and we were unable to recover it. 00:25:58.344 [2024-12-11 15:02:40.980667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.344 [2024-12-11 15:02:40.980710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.344 qpair failed and we were unable to recover it. 00:25:58.344 [2024-12-11 15:02:40.980848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.344 [2024-12-11 15:02:40.980891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.344 qpair failed and we were unable to recover it. 00:25:58.344 [2024-12-11 15:02:40.981139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.344 [2024-12-11 15:02:40.981169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.344 qpair failed and we were unable to recover it. 00:25:58.344 [2024-12-11 15:02:40.981318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.344 [2024-12-11 15:02:40.981346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.344 qpair failed and we were unable to recover it. 00:25:58.344 [2024-12-11 15:02:40.981463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.344 [2024-12-11 15:02:40.981493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.344 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.981662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.981692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.981791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.981820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.981965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.981994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.982117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 
00:25:58.345 [2024-12-11 15:02:40.982292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.982448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.982580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.982715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.982833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.982961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.982990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.983073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.983205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.983327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.983487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 
00:25:58.345 [2024-12-11 15:02:40.983647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.983765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.983888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.983926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.984061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.984089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.984179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.984212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.984338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.984372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.984508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.984537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.984706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.984735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.984888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.984918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.985044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.985073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 
00:25:58.345 [2024-12-11 15:02:40.985171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.985201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.985347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.985384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0de0000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.985499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.985543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.985694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.985725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.985841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.985898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.986094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.986130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.986244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.986274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.986433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.986463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.986600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.986629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.986755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.986781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 
00:25:58.345 [2024-12-11 15:02:40.986933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.986959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.345 qpair failed and we were unable to recover it. 00:25:58.345 [2024-12-11 15:02:40.987954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.345 [2024-12-11 15:02:40.987996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.988098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.988127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.988256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.988286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 
00:25:58.346 [2024-12-11 15:02:40.988410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.988439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.988595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.988626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.988738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.988769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.988872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.988901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.989087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.989115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd8000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.989251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.989280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.989404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.989431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.989589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.989618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.989722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.989751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.989877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.989906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 
00:25:58.346 [2024-12-11 15:02:40.990007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.990033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.990146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.990171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.990285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.990310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b6fa0 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.990422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.990449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f0dd4000b90 with addr=10.0.0.2, port=4420 00:25:58.346 qpair failed and we were unable to recover it. 00:25:58.346 [2024-12-11 15:02:40.990567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.346 [2024-12-11 15:02:40.990615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connect
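Note: errno 111 on Linux is ECONNREFUSED, so every qpair connect attempt above was rejected outright: nothing was accepting TCP connections at 10.0.0.2:4420 (the standard NVMe/TCP port) while the host kept retrying, which is why SPDK reports "qpair failed and we were unable to recover it" for each attempt. The following is a minimal sketch of the failing call using plain POSIX sockets, not SPDK's actual posix_sock_create path; the file name and target address are taken from the log purely for illustration.

/* connect_probe.c -- hypothetical reproduction of the connect() failure
 * seen in this log. Build: cc -o connect_probe connect_probe.c
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Open a plain IPv4 TCP socket, as the posix sock layer would. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

In a failure like this one, the usual suspects are the target side rather than the host: the nvmf target process not yet listening on the port, the listener added for a different address, or the test tearing the target down while the host is still reconnecting.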